
A Transformer with Stack Attention

Author: Jiaoda Li, Jennifer C. White, Mrinmaya Sachan, Ryan Cotterell
Category: cs.CL
Date Published: 2024/05/07
Date Retrieved: 2024/05/08
Date Updated: 2024/05/08
Description: Natural languages are believed to be (mildly) context-sensitive. Despite underpinning remarkably capable large language models, transformers are unable to model many context-free language tasks. In an attempt to address this limitation in the modeling power of transformer-based language models, we propose augmenting them with a differentiable, stack-based attention mechanism. Our stack-based attention mechanism can be incorporated into any transformer-based language model and adds a level of interpretability to the model. We show that the addition of our stack-based attention mechanism enables the transformer to model some, but not all, deterministic context-free languages.
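To make the idea of a differentiable stack concrete, below is a minimal PyTorch sketch of one standard construction: the controller emits a softmax over push, pop, and no-op actions, and the next stack state is the action-weighted average of the three candidate stacks, so the update remains differentiable end to end. This is an illustrative sketch in the spirit of earlier differentiable-stack work, not the paper's exact mechanism; the function `stack_step`, the fixed stack depth, and the zero-padding on pop are assumptions made for the example.

```python
# Illustrative differentiable stack update (not the paper's exact formulation).
# The action distribution makes the discrete-looking push/pop choice soft, so
# gradients flow from the new stack state back to the controller's logits.
import torch
import torch.nn.functional as F

def stack_step(stack, new_elem, action_logits):
    """stack: (depth, d); new_elem: (d,); action_logits: (3,) for push/pop/no-op."""
    depth, d = stack.shape
    p_push, p_pop, p_noop = F.softmax(action_logits, dim=0)

    # Push: shift every slot down by one and place the new element on top.
    pushed = torch.cat([new_elem.unsqueeze(0), stack[:-1]], dim=0)
    # Pop: shift every slot up by one; the bottom slot is filled with zeros.
    popped = torch.cat([stack[1:], torch.zeros(1, d)], dim=0)
    # No-op: keep the stack as it is.
    return p_push * pushed + p_pop * popped + p_noop * stack

# Toy usage: a depth-8 stack of 16-dimensional vectors.
stack = torch.zeros(8, 16)
elem = torch.randn(16)
logits = torch.randn(3, requires_grad=True)
new_stack = stack_step(stack, elem, logits)
new_stack.sum().backward()   # gradients reach the action logits
print(new_stack.shape, logits.grad)
```

In a transformer, a state produced this way (for instance the top-of-stack vector `new_stack[0]`) could be exposed to the attention mechanism as an additional value to read from; the paper itself defines how its stack interacts with the rest of the model.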
Posts: 4
Readers: 0
Score: 1
Tweeters: 4