Winter Storm 2026
Winter Storm 2026 was held Tuesday, January 20th through Friday, January 23rd, 2026!
Session times and details are listed at the bottom of the page. Thank you to everyone who participated and attended this year's event.
Schedule
Tuesday, January 20th, 2026
9:00 AM to 11:00 AM
Writing Time • If you're interested in some extra accountability and productivity, join a Winter Storm 2026 writing group.
11:00 AM to 12:00 PM
Workshop Series: MEG Series • Session Speaker: Ellen Lau (LING)
12:00 PM to 1:00 PM
Lunch and Social
1:00 PM to 2:00 PM
Co-operating mechanisms of human sentence comprehension • Visiting Speaker: John Hale (Johns Hopkins University)
3:00 PM to 4:00 PM
Roundtable Discussion: Perspectives on early language access for deaf and hard of hearing children in hearing families
Wednesday, January 21st, 2026
9:00 AM to 11:00 AM
Writing Time • If you're interested in some extra accountability and productivity, join a Winter Storm 2026 writing group.
11:00 AM to 12:00 PM
Leveraging Technology for Communicative and Task-Based Language Learning • Visiting Speaker: Lara Bryfonski (Georgetown University)
12:00 PM to 1:00 PM
Lunch and Social
1:00 PM to 2:30 PM
Workshop Series: MEG Series • Session Speakers: Charlie Fisher (ECE) and Ciaran Stone (NACS)
3:00 PM to 4:00 PM
The tool I’m obsessed with
Thursday, January 22nd, 2026
9:00 AM to 11:00 AM
Writing Time • If you're interested in some extra accountability and productivity, join a Winter Storm 2026 writing group.
11:00 AM to 12:00 PM
Thematic Relations: Shared Constraints in the Mind Across Vision and Language • Visiting Speaker: Alon Hafri (University of Delaware)
12:00 PM to 1:00 PM
Lunch and Social
1:00 PM to 2:30 PM
Workshop Series: MEG Series • Session Speaker: Vrishab Commuri (ECE)
Hands on with the EP Toolkit • Session Leader: Joe Dien (HDQM)
3:00 PM to 4:00 PM
Transitioning your IRB protocols from IRBNet to Kuali • Session Leader: Jamie Smith (CHSE)
Friday, January 23rd, 2026
9:00 AM to 11:00 AM
Writing Time • If you're interested in some extra accountability and productivity, join a Winter Storm 2026 writing group.
11:00 AM to 12:00 PM
Exploring the psycholinguistics of syntactic variation • Visiting Speaker: Cynthia Lukyanenko (George Mason University)
12:00 PM to 1:00 PM
Lunch and Social
1:00 PM to 2:30 PM
Workshop: Automatic transcription and forced alignment • Session Leader: Ciaran Stone (NACS)
Transportation
Parking
For those traveling from outside the university, parking is available close to the Language Science Center at the Regents Drive Garage.
Talks & Speakers
Co-operating mechanisms of human sentence comprehension • John Hale (Johns Hopkins University)
Tuesday, January 20th • 1:00 PM to 2:00 PM
Abstract: Is it possible to resolve a complicated human ability like sentence processing into simpler sub-functions? This talk undertakes to do so, considering both (1) the way that successive words fit into candidate grammatical structures and (2) retrieval, from memory, of information about earlier words that stand in grammatical relations with the current word. Reanalyzing MEG data from Brodbeck et al. (2022), we find support for the idea that these two mechanisms are realized in different parts of the brain's language network. Modeling these mechanisms in an interpretable way contributes to a cognitive account of how people use linguistic information from the left context during naturalistic language comprehension.
About: Dr. Hale is a computational linguist interested in human language processing at the sentence level. His current work involves modeling observed neural signals from across languages using incremental parsing algorithms and other natural-language processing tools.
Leveraging Technology for Communicative and Task-Based Language Learning • Lara Bryfonski (Georgetown University)
Wednesday, January 21st • 11:00 AM to 12:00 PM
Abstract: Given the explosive growth of technology in second language (L2) classrooms and the rapid expansion of digitally mediated instruction, there has been growing interest in how technology can be meaningfully integrated with research-based communicative approaches like task-based language teaching (TBLT). TBLT is founded on the idea that language learning tasks should reflect real-world language use and create a genuine need to communicate in the target language (Long, 2005; 2015). As technology increasingly mediates how learners interact, collaborate, and receive feedback in their target language(s), it is critical to explore how technology shapes task design, interactional processes, and learning outcomes.
About: Lara Bryfonski is an applied linguist and second language acquisition researcher. Her research interests include: second language learning and pedagogy, task-based language teaching, interaction and corrective feedback, language teacher training, individual differences in second language learning, language learning in study abroad, second language research methods and meta-analysis.
Her book, The Art and Science of Language Teaching (co-authored with Alison Mackey) was recently published by Cambridge University Press. Check it out!
She is an associate professor in the Department of Linguistics at Georgetown University, where she teaches courses on these topics and advises graduate and undergraduate linguistics students.
Thematic Relations: Shared Constraints in the Mind Across Vision and Language • Alon Hafri (University of Delaware)
Thursday, January 22nd • 11:00 AM to 12:00 PM
Abstract: Language and vision, two domains central to cognitive science, are often studied independently. Yet there is increasing evidence that both encode the world in terms of relations and roles (e.g., Agent, Patient, Figure, Reference). This raises a fundamental question: to what extent do these systems share not just representational ingredients, but constraints on what can be represented and how? In this talk, I explore the possibility that certain constraints on linguistic representation reflect deeper constraints in non-linguistic cognition, particularly high-level vision. I first present work suggesting that visual scene understanding is, in some cases, compositional: relational representations (e.g., a vase on a table) are constructed sequentially and in a canonical order that mirrors how relations are conceptually composed in sentence interpretation. I then show that visual cognition also exhibits limits on which relations can be simultaneously represented. Using the spatial relations IN and NEAR, I show that rapid proximity judgments (a prerequisite for NEAR evaluations) are selectively impaired when objects instantiate containment, paralleling the linguistic oddness of describing a ball as “near” a box when it is “inside” it. Finally, I discuss constraints on verb meanings in terms of conceptual and syntactic role mappings. Languages appear to systematically lack “inverse verbs” that reverse the canonical Agent–Subject / Patient–Object mapping, and I present cross-linguistic evidence from adult learners that verbs with such a mapping are difficult to learn. These findings open up the possibility that such linking generalizations reflect either properties of the mental grammar or more basic constraints on event representation—possibilities we are currently exploring. 
Taken together, these findings suggest that some constraints on linguistic structure may have non-linguistic origins, reflecting shared limits on relational representation across language and vision.
About: Alon Hafri, Ph.D., is an assistant professor in the Department of Linguistics and Cognitive Science at the University of Delaware. He received his Ph.D. in psychology in 2019 from the University of Pennsylvania.
Hafri explores connections between language and vision in the mind, using behavioral and neuroimaging techniques to do so. Prior to UD, he was a postdoctoral research fellow in the Department of Psychological & Brain Sciences and the Department of Cognitive Science, both at Johns Hopkins University. Outside of research, Alon makes homemade beer, soup and soap (and only rarely confuses the three).
Exploring the psycholinguistics of syntactic variation • Cynthia Lukyanenko (George Mason University)
Friday, January 23rd • 11:00 AM to 12:00 PM
Abstract: Language varies both within and between individuals. Sociolinguistic research shows that language users have extensive knowledge of the social significance of variation in their communities, but how is linguistic knowledge of variation structured, acquired and used? In this talk, I discuss two lines of work that contribute to understanding the structure and source of language users’ knowledge of syntactic variation. One explores Mainstream US English users’ comprehension and processing of negative concord structures (e.g., I didn’t say nothing ‘I said nothing’). The other explores children’s acquisition of variable and categorical patterns in English subject-verb agreement (e.g., there’s flowers, flowers are pretty).
About: Cynthia Lukyanenko is an Assistant Professor of Linguistics in the Department of English at George Mason University and the Director of Processing and Acquisition of Language at Mason (PALM). She studies adults' and children's use of morphological and morphosyntactic cues during comprehension, and teaches courses on research methods and language acquisition.
Her research has explored adults' and preschoolers’ knowledge of plural morphology, subject-verb agreement, and constraints on pronoun coreference. In recent work, she explores how children's acquisition of these aspects of language changes if the input they receive is variable, and how linguistic variation influences adults' real-time comprehension.
Before coming to Mason, she conducted postdoctoral research in the Department of Spanish, Italian and Portuguese at Penn State. She earned her MA and PhD in Developmental Psychology from the University of Illinois Urbana-Champaign, and before that, her BA in Linguistics from the University of Maryland College Park.
Methods Workshops
MEG Series • Ellen Lau (LING), Charlie Fisher (ECE), Ciaran Stone (NACS), Vrishab Commuri (ECE)
Tuesday, January 20th through Thursday, January 22nd • Various Times
Learn about using MEG for research in cognitive neuroscience and psycholinguistics.
Tuesday, January 20th, 11:00 AM to 12:00 PM • Ellen Lau (LING)
MEG and what you can do with it
UMD offers language scientists a special opportunity as one of the few American universities to maintain a research-dedicated magnetoencephalography (MEG) lab. But thirty years after whole-head MEG recordings became widely available, many cognitive scientists still don't quite know what to think about the technique. We've all heard the tagline that MEG provides better combined spatial and temporal resolution than EEG or fMRI, but what can we actually learn with it that we couldn't with EEG or fMRI, and why is it worth the trouble? In this session we'll cover MEG basics, with a focus on what it can offer language scientists in particular and what's involved, at a practical level, in getting a project going.
Wednesday, January 21st, 1:00 PM to 2:30 PM • Charlie Fisher (ECE) and Ciaran Stone (NACS)
Neural Signal Analysis
In this session, we dive into how to analyze neural data using common methods such as event-related potentials (ERPs) and temporal response functions (TRFs), both of which reveal time-locked neural responses to stimuli. Using Google Colab, we will provide a pipeline for analyzing MEG data, from raw recordings to analyzable signals. The session will be hands-on and will emphasize different ways of analyzing neural signals. We will also discuss useful preprocessing methods, such as artifact detection using EMG.
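The TRF idea mentioned above can be sketched in a few lines: a TRF is a set of regression weights mapping time-lagged copies of a stimulus feature (such as the speech envelope) onto the neural response, commonly estimated with ridge regression. The sketch below is a toy illustration in plain numpy with synthetic data, not the workshop's actual Colab pipeline; all names and parameters are invented for the example.

```python
import numpy as np

def estimate_trf(stimulus, response, n_lags, alpha=1.0):
    """Estimate a temporal response function (TRF) by ridge regression.

    stimulus: 1-D stimulus feature (e.g., speech envelope), shape (T,)
    response: 1-D neural signal (e.g., one MEG channel), shape (T,)
    n_lags:   number of time lags to model
    alpha:    ridge regularization strength
    """
    T = len(stimulus)
    # Build a lagged design matrix: column k holds the stimulus delayed by k samples.
    X = np.zeros((T, n_lags))
    for k in range(n_lags):
        X[k:, k] = stimulus[: T - k]
    # Ridge solution: w = (X'X + alpha*I)^-1 X'y
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_lags), X.T @ response)

# Synthetic check: a response generated by a known kernel should be recovered.
rng = np.random.default_rng(0)
stim = rng.standard_normal(5000)
true_kernel = np.array([0.0, 0.5, 1.0, 0.5, 0.0])
resp = np.convolve(stim, true_kernel)[:5000] + 0.01 * rng.standard_normal(5000)
trf = estimate_trf(stim, resp, n_lags=5, alpha=0.1)
```

Real pipelines (e.g., in MNE-Python or eelbrain) add cross-validated regularization and multichannel handling, but the lagged-regression core is the same.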
Thursday, January 22nd, 1:00 PM to 2:30 PM • Vrishab Commuri (ECE)
Functional Connectivity Analysis with MEG
The brain forms functional networks by synchronizing neural activity across various regions during rest and task performance. Functional connectivity provides a framework for analyzing these interactions, allowing us to interrogate how these regions influence one another within and between neural circuits. We will discuss basic methods for performing functional connectivity analyses using MEG data. Attendees will learn about the basic functions of neural activity in specific frequency bands and cortical areas. We will discuss how neural current sources organize into networks and how these network configurations change with task demands. Basic approaches will be covered, primarily focusing on Granger causality, and the discussion will be directed towards practical examples, with hands-on implementation in Python using Google Colab (TBD).
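The Granger-causality idea at the heart of this session can be sketched simply: signal x "Granger-causes" signal y if adding x's past to an autoregressive model of y improves the prediction of y beyond y's own past. Below is a toy numpy illustration on synthetic data, not the workshop's materials; the function name and test signals are invented for the example.

```python
import numpy as np

def granger_f(x, y, order=2):
    """F statistic for whether x Granger-causes y: compare the residual sum
    of squares of an AR model of y (restricted) against one that also
    includes x's past (full)."""
    T = len(y)
    Y = y[order:]
    # Lagged predictors: y's own past (restricted), plus x's past (full).
    y_lags = [y[order - k : T - k] for k in range(1, order + 1)]
    x_lags = [x[order - k : T - k] for k in range(1, order + 1)]
    X_r = np.column_stack(y_lags)
    X_f = np.column_stack(y_lags + x_lags)

    def rss(X):
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        r = Y - X @ beta
        return r @ r

    rss_r, rss_f = rss(X_r), rss(X_f)
    n = len(Y)
    return ((rss_r - rss_f) / order) / (rss_f / (n - 2 * order))

# Synthetic check: y is driven by x's past, but not vice versa.
rng = np.random.default_rng(1)
x = rng.standard_normal(2000)
y = np.zeros(2000)
for t in range(1, 2000):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.standard_normal()
```

In the synthetic example the x-to-y statistic comes out far larger than the y-to-x one, matching the construction. MEG connectivity work layers source localization and frequency-band decomposition on top of this basic model comparison.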
Hands on with the EP Toolkit • Joe Dien (HDQM)
Thursday, January 22nd • 1:00 PM to 2:30 PM
The EP Toolkit (Dien, 2010) is a full-featured open-source Matlab EEG analysis suite (https://sourceforge.net/projects/erppcatoolkit/) used by hundreds of researchers around the world. It has particular strengths in PCA (Dien, 2012), artifact correction (Dien, 2024), and robust ANOVA analysis (Dien, 2017). Bring your EEG data (practice data available) and learn how to use the EP Toolkit to get the most out of it.
Automatic transcription and forced alignment • Ciaran Stone (NACS)
Friday, January 23rd • 1:00 PM to 2:30 PM
Who doesn't love spending hundreds of hours meticulously transcribing spoken words and phonemes from an audio file into a plain text file by hand? If you like how much time it takes, this session is not for you. We will look at how to set up and start using two automatic transcription tools for language sample analysis: the Montreal Forced Aligner and batchalign2 (TalkBank). Zero coding experience is assumed, but please do bring an internet-connected computer. The session will include discussion of things you can do with a transcribed language sample, such as generating word- and phoneme-level surprisal values for a TRF analysis, or executing CLAN commands for analysis of language samples from pre-recorded audio or video interactions.
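As a toy illustration of the word-level surprisal values mentioned above: surprisal is the negative log probability of a word under some language model. The sketch below uses the simplest possible model, unigram frequencies estimated from the sample itself; a real TRF analysis would use a proper language model, and all names here are invented for the example.

```python
import math
from collections import Counter

def unigram_surprisal(words):
    """Map each word to its unigram surprisal, -log2 p(word),
    with probabilities estimated from the sample's own frequencies."""
    counts = Counter(words)
    total = sum(counts.values())
    return {w: -math.log2(c / total) for w, c in counts.items()}

transcript = "the cat sat on the mat the cat slept".split()
surprisal = unigram_surprisal(transcript)
# Frequent words ("the", 3/9) get lower surprisal than rare ones ("mat", 1/9).
```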
Sessions
Roundtable discussion: Perspectives on early language access for deaf and hard of hearing children in hearing families
Tuesday, January 20th • 3:00 PM to 4:00 PM
An estimated 210 deaf and hard of hearing (DHH) children are born in Maryland every year, 90-95% of them to hearing families who have never met a deaf person before. Among the pressing challenges these families face is the decision of which language(s) to use with their DHH child: spoken English through hearing technology like cochlear implants? a natural sign language like ASL? both? As the home to the Maryland Cochlear Implant Center of Excellence, an expanding ASL program, and collaborations in sign language linguistics with nearby Gallaudet University, UMD can offer many resources for parents of DHH children as they navigate the daunting questions of language choice. This exploratory roundtable is designed as a forum for those with diverse perspectives on this topic to share their knowledge, research, and experience with each other. We hope this will be the start of a series of dialogues and/or a reading group, to allow continued exchanges and potential collaborations on how to provide accessible and balanced information for families of DHH children about both ASL and cochlear implants, as well as raise general awareness about deafness and the importance of early language access for all children.
Potential topics for discussion (not an exhaustive list, but a start!):
- Bimodal bilingualism: What is it, how does it compare to spoken-language bilingualism, and how feasible is it for DHH children and their hearing families to achieve?
- Perspectives of speech clinicians: what are the tools that you use to teach spoken language to pre-lingual CI users? What are the challenges?
- CI users in bilingual environments: what are the outcomes for spoken language acquisition? What advice does the speech and hearing community give families in these circumstances?
- Is there support for spoken bilingualism?
- Effectiveness of CIs in real-world listening activities
Please read the following very short article and bring your reactions to the roundtable discussion.
The tool I’m obsessed with
Wednesday, January 21st • 3:00 PM to 4:00 PM
Students and faculty are eager to tell you about the tool they’re obsessed with. This session will feature lightning talks on note-taking tools, an R package, and Gantt charts!
Transitioning your IRB protocols from IRBNet to Kuali • Jamie Smith (CHSE)
Thursday, January 22nd • 3:00 PM to 4:00 PM
Have you been closely reading all those emails about the transition from IRBNet to Kuali? Yeah, we haven’t either. Come learn everything you need to know!
Also at Winter Storm
Writing Time
Tuesday, January 20th through Friday, January 23rd • 9:00 AM to 11:00 AM
Come get your writing done at the LSC. If you’re interested in some extra accountability and productivity, join a writing group.
Last quarter mile projects
When you start a new project, it often takes years to get from the idea and design, to data collection and analysis, to publication. What if you could start at the end, and submit a publication within months?
The LSC asked faculty to tell us about their "last quarter mile" projects (partly or mostly done, but needing a bit more work to get to the finish line), as well as existing datasets that could support new exploration and analysis. These projects are great opportunities for students to gain valuable experience in collaboration, data analysis, and publication writing.
We will be facilitating meetings between PIs and interested students during Winter Storm. If you’re faculty or a student interested in future matchups, contact shevaun@umd.edu for more information.
