Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Logical Languages Accepted by Transformer Encoders with Hard Attention
Authors: Pablo Barceló, Alexander Kozachinskiy, Anthony Widjaja Lin, Vladimir Podolskii
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We contribute to the study of formal languages that can be recognized by transformer encoders. We focus on two self-attention mechanisms: (1) UHAT (Unique Hard Attention Transformers) and (2) AHAT (Average Hard Attention Transformers). UHAT encoders are known to recognize only languages inside the circuit complexity class AC0... On the other hand, AHAT encoders can recognize languages outside AC0... We first show a negative result that there is an AC0-language that cannot be recognized by an UHAT encoder. On the positive side, we show that UHAT encoders can recognize a rich fragment of AC0-languages... We then show that AHAT encoders can recognize all languages of our logic even when we enrich it with counting terms. |
| Researcher Affiliation | Academia | Pablo Barceló, IMC, PUC Chile & IMFD Chile & CENIA, Santiago, Chile; Alexander Kozachinskiy, IMFD and CENIA, Santiago, Chile; Anthony Widjaja Lin, University of Kaiserslautern-Landau & Max-Planck Institute for Software Systems, Kaiserslautern, Germany; Vladimir Podolskii, Tufts University, Medford, USA |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | No | The paper is theoretical and does not conduct experiments on datasets, therefore no public dataset information is provided. |
| Dataset Splits | No | The paper is theoretical and does not conduct experiments on datasets, therefore no dataset split information is provided. |
| Hardware Specification | No | The paper is theoretical and does not describe experiments, therefore no hardware specifications are mentioned. |
| Software Dependencies | No | The paper is theoretical and does not describe experiments, therefore no software dependencies with version numbers are mentioned. |
| Experiment Setup | No | The paper is theoretical and does not describe experiments, therefore no experimental setup details are provided. |
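The UHAT/AHAT distinction quoted in the Research Type row can be sketched numerically: unique hard attention attends to a single maximal-scoring position (here the leftmost, though conventions vary), while average hard attention averages the values at all positions that tie for the maximal score. The following is a minimal illustration only; the function names and the example scores/values are hypothetical and not taken from the paper.

```python
def unique_hard_attention(scores, values):
    # UHAT: attend to exactly one position -- the leftmost argmax of the scores.
    m = max(scores)
    i = scores.index(m)  # index() returns the leftmost maximal position
    return values[i]

def average_hard_attention(scores, values):
    # AHAT: average the values at all positions attaining the maximal score.
    m = max(scores)
    tied = [v for s, v in zip(scores, values) if s == m]
    return sum(tied) / len(tied)

# Two positions tie for the maximal score of 0.9.
scores = [0.2, 0.9, 0.9, 0.1]
values = [1.0, 2.0, 4.0, 8.0]

print(unique_hard_attention(scores, values))   # 2.0 (value at leftmost max)
print(average_hard_attention(scores, values))  # 3.0 (average of 2.0 and 4.0)
```

Averaging over ties is what lets AHAT heads compute counting-style quantities (and hence recognize some languages outside AC0), whereas UHAT's single-position selection does not.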