Logical Languages Accepted by Transformer Encoders with Hard Attention

Authors: Pablo Barceló, Alexander Kozachinskiy, Anthony Widjaja Lin, Vladimir Podolskii

ICLR 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We contribute to the study of formal languages that can be recognized by transformer encoders. We focus on two self-attention mechanisms: (1) UHAT (Unique Hard Attention Transformers) and (2) AHAT (Average Hard Attention Transformers). UHAT encoders are known to recognize only languages inside the circuit complexity class AC0... On the other hand, AHAT encoders can recognize languages outside AC0... We first show a negative result that there is an AC0-language that cannot be recognized by an UHAT encoder. On the positive side, we show that UHAT encoders can recognize a rich fragment of AC0-languages... We then show that AHAT encoders can recognize all languages of our logic even when we enrich it with counting terms.
Researcher Affiliation | Academia | Pablo Barceló, IMC, PUC Chile & IMFD Chile & CENIA, Santiago, Chile (pbarcelo@uc.cl); Alexander Kozachinskiy, IMFD and CENIA, Santiago, Chile (alexander.kozachinskyi@cenia.cl); Anthony Widjaja Lin, University of Kaiserslautern-Landau & Max-Planck Institute for Software Systems, Kaiserslautern, Germany (awlin@mpi-sws.org); Vladimir Podolskii, Tufts University, Medford, USA (vladimir.podolskii@tufts.edu)
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described.
Open Datasets | No | The paper is theoretical and does not conduct experiments on datasets; therefore, no public dataset information is provided.
Dataset Splits | No | The paper is theoretical and does not conduct experiments on datasets; therefore, no dataset split information is provided.
Hardware Specification | No | The paper is theoretical and does not describe experiments; therefore, no hardware specifications are mentioned.
Software Dependencies | No | The paper is theoretical and does not describe experiments; therefore, no software dependencies with version numbers are mentioned.
Experiment Setup | No | The paper is theoretical and does not describe experiments; therefore, no experimental setup details are provided.
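
For orientation, the UHAT and AHAT mechanisms named in the abstract above differ only in how an attention head aggregates the positions that maximize its attention score: unique hard attention reads the value at a single maximizing position (under a fixed tie-breaking rule, e.g. the leftmost one), while average hard attention averages the values over all maximizing positions. The following is a minimal NumPy sketch of the two rules; it is not taken from the paper, and the toy scores, values, and leftmost tie-break are illustrative assumptions.

```python
# Minimal sketch (not from the paper) contrasting the two hard-attention rules.
import numpy as np

def unique_hard_attention(scores, values):
    """UHAT-style head: output the value at a single position attaining the
    maximal score (here, the leftmost maximizer as the tie-breaking rule)."""
    i = int(np.argmax(scores))          # np.argmax returns the first (leftmost) maximizer
    return values[i]

def average_hard_attention(scores, values):
    """AHAT-style head: output the average of the values at *all* positions
    attaining the maximal score."""
    mask = np.isclose(scores, np.max(scores))   # every position tied for the maximum
    return values[mask].mean(axis=0)

# Toy example: three positions, two of them tied for the maximal score.
scores = np.array([2.0, 5.0, 5.0])
values = np.array([[1.0, 0.0],
                   [0.0, 1.0],
                   [4.0, 1.0]])

print(unique_hard_attention(scores, values))   # -> [0. 1.]  (only position 1 is read)
print(average_hard_attention(scores, values))  # -> [2. 1.]  (positions 1 and 2 are averaged)
```

Averaging over all maximizers is what lets an AHAT head compute quantities such as the fraction of positions with a given property, which is the intuition behind the counting terms mentioned in the abstract; a UHAT head, by contrast, only ever sees one position per query.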