IsarStep: a Benchmark for High-level Mathematical Reasoning
Authors: Wenda Li, Lei Yu, Yuhuai Wu, Lawrence C. Paulson
ICLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments and analysis reveal that while the task is challenging, neural models can capture non-trivial mathematical reasoning. We further design a hierarchical transformer that outperforms the transformer baseline. ... Table 1: Test set accuracies (exact match) and BLEU scores of different models on the IsarStep task. |
| Researcher Affiliation | Collaboration | Wenda Li, University of Cambridge, wl302@cam.ac.uk; Lei Yu, DeepMind, leiyu@google.com; Yuhuai Wu, University of Toronto and Vector Institute, ywu@cs.toronto.edu; Lawrence C. Paulson, University of Cambridge, lp15@cam.ac.uk |
| Pseudocode | No | The paper describes approaches and models like hierarchical transformer but does not include any specific pseudocode or algorithm blocks. |
| Open Source Code | Yes | The dataset and models are available from: https://github.com/Wenda302/IsarStep |
| Open Datasets | Yes | We have built the IsarStep dataset by mining arguably the largest publicly-hosted repository of mechanised proofs: the Archive of Formal Proofs (AFP). The AFP is checked by the Isabelle proof assistant (Paulson, 1994) and contains 143K lemmas. ... The dataset and models are available from: https://github.com/Wenda302/IsarStep |
| Dataset Splits | Yes | The final dataset split is 820K, 5000, 5000 for the training, validation, and test sets, respectively. |
| Hardware Specification | Yes | Training the transformer and HAT takes 72 hours on 4 Tesla-V100 GPUs. |
| Software Dependencies | No | For RNNSearch (Bahdanau et al., 2015; Wu et al., 2016), we use 2-layer LSTMs (Hochreiter & Schmidhuber, 1997) with 512 hidden units and 0.2 dropout rate. The hyperparameters for training the transformer are the same as transformer base (Vaswani et al., 2017), i.e. 512 hidden size, 2048 filter size, 8 attention heads, and 6 layers for both the encoder and decoder. ... We use the default Sledgehammer (Blanchette et al., 2011) method in Isabelle as our automatic theorem prover for checking derivations. |
| Experiment Setup | Yes | For RNNSearch (Bahdanau et al., 2015; Wu et al., 2016), we use 2-layer LSTMs (Hochreiter & Schmidhuber, 1997) with 512 hidden units and 0.2 dropout rate. The hyperparameters for training the transformer are the same as transformer base (Vaswani et al., 2017), i.e. 512 hidden size, 2048 filter size, 8 attention heads, and 6 layers for both the encoder and decoder. The hyperparameters for HAT are the same, except that the number of local context layers is 4 and the number of global context layers is 2. We share the source and target token embeddings for all three models. We use beam search decoding with beam size 5 (for top-1 accuracies) and 10 (for top-10 accuracies). The configurations for different models are the best ones we found based on validation performance. We train these models for 100K steps and pick the checkpoint with the best BLEU on the validation set to evaluate on the test set. |
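
The Experiment Setup row quotes the model sizes verbatim (512 hidden size, 2048 filter size, 8 attention heads, 4 local and 2 global context layers for HAT). As a minimal sketch of how those numbers might fit together, the PyTorch snippet below wires them into a hierarchical encoder in which local layers attend within each source statement and global layers attend across all statements. The class name `HierarchicalEncoder`, the per-statement local/global split, and the use of `torch.nn.TransformerEncoder` are our assumptions for illustration; they are not taken from the authors' released implementation at the linked repository.

```python
import torch
import torch.nn as nn


class HierarchicalEncoder(nn.Module):
    """Sketch of a hierarchical source encoder.

    Local layers encode each source statement separately; global layers
    then attend across statement boundaries. Sizes follow the values
    quoted in the table above; the module layout itself is an assumption,
    not the authors' released code.
    """

    def __init__(self, vocab_size, d_model=512, nhead=8, d_ff=2048,
                 n_local=4, n_global=2, dropout=0.1):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)

        def make_stack(n_layers):
            layer = nn.TransformerEncoderLayer(
                d_model, nhead, dim_feedforward=d_ff, dropout=dropout,
                batch_first=True)
            return nn.TransformerEncoder(layer, num_layers=n_layers)

        self.local_stack = make_stack(n_local)
        self.global_stack = make_stack(n_global)

    def forward(self, statements):
        # statements: list of (batch, seq_len_i) LongTensors, one entry per
        # source statement surrounding the proof step to be synthesised.
        local = [self.local_stack(self.embed(s)) for s in statements]
        # Concatenate the locally encoded statements and let the global
        # layers attend across statement boundaries.
        return self.global_stack(torch.cat(local, dim=1))


# Toy usage: two source statements, batch of 2, a 10K-token vocabulary.
enc = HierarchicalEncoder(vocab_size=10_000)
stmts = [torch.randint(0, 10_000, (2, 12)), torch.randint(0, 10_000, (2, 20))]
print(enc(stmts).shape)  # torch.Size([2, 32, 512])
```

The decoder, shared source/target embeddings, beam search (beam size 5 or 10), and the 100K-step training schedule mentioned in the table are omitted here; the sketch only illustrates how the quoted local/global layer counts could be composed.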