
Publication details

2023, International Conference on Learning Representations

Attention-likelihood relationship in transformers (04h Conference paper in a scientific or class A journal)

Ruscio Valeria, Maiorca Valentino, Silvestri Fabrizio

We analyze how large language models (LLMs) represent out-of-context words, investigating how much they rely on the surrounding context to capture their semantics. Our likelihood-guided text perturbations reveal a correlation between token likelihood and attention values in transformer-based language models. Extensive experiments show that when a token is unexpected, the model attends less to the token itself when computing its representation, particularly in higher layers. These findings have valuable implications for assessing the robustness of LLMs in real-world scenarios. Fully reproducible codebase at https://github.com/Flegyas/AttentionLikelihood.
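A minimal sketch of the kind of measurement the abstract describes (this is not the authors' code; the full implementation is in the linked repository). It assumes a causal LM (GPT-2 via HuggingFace transformers) and, for each token, compares its likelihood under the model with the attention weight it assigns to its own position, layer by layer:

```python
# Sketch: correlate per-token likelihood with per-token self-attention, per layer.
# Model choice (GPT-2), the self-attention diagonal as the "attention to itself"
# metric, and Spearman correlation are all assumptions for illustration.
import torch
from scipy.stats import spearmanr
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2", output_attentions=True)
model.eval()

text = "The cat sat on the mat and purred quietly."
inputs = tokenizer(text, return_tensors="pt")
input_ids = inputs["input_ids"][0]

with torch.no_grad():
    outputs = model(**inputs)

# Likelihood of each token given its left context (tokens at positions 1..n-1).
log_probs = torch.log_softmax(outputs.logits[0], dim=-1)
token_likelihoods = log_probs[torch.arange(len(input_ids) - 1), input_ids[1:]]

# Attention each token pays to its own position, averaged over heads,
# one value per layer (position 0 is skipped: it has no left context,
# so it has no likelihood estimate above).
for layer_idx, layer_attn in enumerate(outputs.attentions):
    self_attn = layer_attn[0].mean(dim=0).diagonal()[1:]  # shape: (seq_len - 1,)
    rho, p = spearmanr(token_likelihoods.numpy(), self_attn.numpy())
    print(f"layer {layer_idx:2d}: spearman rho = {rho:+.3f} (p = {p:.3f})")
```

Under the paper's finding, low-likelihood (unexpected) tokens would tend to show smaller self-attention values, with the effect becoming more pronounced in higher layers.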