Brian Dillon / Insights at the Memory-Syntax Interface in Humans and Language Models

Photo: Brian Dillon (UMass Amherst), LING Colloquium

Linguistics | Maryland Language Science Center
Wednesday, March 11, 2026, 12:30 pm - 1:30 pm
H.J. Patterson Hall

On Wednesday, March 11, Brian Dillon *11 returns to give a talk on psycholinguistics and LLMs, comparing what we know about the use of working memory in real-time human language comprehension with what we know about the states of LLMs that support next-word prediction.
How do we encode structured linguistic objects in memory to support real-time language comprehension? And what does this tell us about the language-memory interface, in humans and human-inspired machines? In the first part of this talk, I will share some recent insights into how language comprehenders use their working memory resources to encode and interpret linguistic input in real time, drawing largely on work from our lab using a variety of computational and behavioral techniques.

In the second part of this talk, I will suggest that these insights help to narrow the considerable gap between how the language-memory interface is conceived of in humans and in language models. This creates exciting opportunities at the intersection of artificial intelligence and psycholinguistics: advances in AI have produced theoretical artifacts that allow us to ask questions about language processing at scale. I will make this argument with work from our lab on the processing of garden-path sentences, as well as on how working memory constraints shape language learning and processing in a resource-rational language learning setting. These results advance our understanding of human language processing, but they also underscore that the project of human-AI alignment, at the level of how languages are learned and represented, is still nascent.

But I will argue that this alignment is worth pursuing, for both theoretical and practical reasons. I will close with in-progress work that aims to make good on this claim in applications-oriented contexts, exploring questions of user modeling, memory constraints, and human-like understanding of context in the service of enhanced accessibility.
