Research Spotlight Lunches (Every day at 12:30 pm)

Come for research, stay for lunch! At our well-loved Research Spotlight Lunches, UMD faculty share exciting new advances in their work.

January 8th, 2018

Speaker: Matthew J. Goupell (HESP)
Title: Aging and hearing through cochlear implants: What does it tell us about processing degraded sentences?

Abstract:

Age-related speech-understanding deficits are most prominent when signals are degraded by environmental factors (e.g., background noise, reverberation, accenting) or subject-related factors (e.g., hearing loss). The consensus is that a combination of peripheral, central, and cognitive deficits contributes to age-related speech-understanding deficits. Cochlear implants, bionic auditory prostheses that bypass some of the peripheral encoding mechanisms, offer a unique opportunity to more directly investigate the effects of age-related changes to central and cognitive speech-processing mechanisms. We will examine, across a variety of studies, degraded speech understanding through cochlear implants and cochlear-implant simulations to determine how cochlear-implant users manage effective real-time sentence processing and whether the data inform us about the contribution of central temporal-processing deficits.

January 9th, 2018

Speaker: Elizabeth Redcay (PSYC)
Title: Spontaneous mentalizing during real-time social interaction

Abstract: 

Successful communication depends on social cognition. Social partners must make appropriate inferences about each other’s beliefs and intentions through both verbal and nonverbal information and use this information to predict their partner’s responses. However, the study of communication is typically conducted in asocial contexts divorced from social interaction (i.e., “offline” contexts). In this talk, I take a social-interactive neuroscience approach and argue that the dominant “offline” approach to understanding social interaction has left significant gaps in our knowledge of how the brain engages in social interaction in the real world. Specifically, through a series of experiments, I demonstrate that engaging with a social partner in real time, compared to offline, leads to spontaneous recruitment of the brain’s “mentalizing” network, even when task demands do not require explicit mentalizing. I close with future directions and open questions about the role this mentalizing network may play in social-interactive competence in both typically developing children and children with autism.

January 10th, 2018

Speaker: Pedro Mateo Pedro (LSC, U. del Valle de Guatemala)
Title: Field Station Guatemala: Research and social impact

Abstract:

This presentation will be divided into two parts. In the first part, I will briefly talk about the Field Station in Guatemala, emphasizing how work carried out through the station has had a positive social impact on the communities we collaborate with and has benefited students carrying out research projects. In the second part, I will talk about the acquisition of causatives in Q’anjob’al, a Mayan language of Guatemala. I will show that the causative alternation is scarce in the Q’anjob’al child data, in contrast to what has been found in the acquisition of other Mayan languages such as K’iche’ and Tzotzil. Nevertheless, children acquiring Q’anjob’al do acquire the causative, showing a preference for the periphrastic causative over the morphological causative. I suggest that this asymmetry can be attributed to a V1V2 construction found in Q’anjob’al, which Mateo Toledo (2008) shows has broader functions beyond the causative and which is a favored construction in the language. Furthermore, children have yet to acquire all derivational morphology in Q’anjob’al by the time they would be acquiring the causative suffix, which also contributes to the observed asymmetry.

January 11th, 2018

Speaker: Jeff Lidz (LING)
Title: Second Year Syntax

Abstract: TBA

January 12th, 2018

Speaker: Elisa Gironzetti (Dept. of Spanish & Portuguese)
Title: Smiling and the Negotiation of Humor in Conversation

Abstract: 

This presentation focuses on the role of smiling intensity as a non-discrete marker of humor and explains how, by means of smiling, speakers negotiate what counts as humorous during a conversation. I will report data from two studies in which speakers engaged in face-to-face (six dyads) and computer-mediated (eight dyads) spontaneous conversations that included various humorous instances (irony, jab lines, and punch lines). The analysis of participants’ verbal productions and facial expressions shows that the occurrence of humor correlates positively with an increase in smiling intensity and smiling synchronicity relative to the baseline of the conversation. The presence of humor is foreshadowed by a localized increase in smiling intensity and synchronicity, both generally and when humor is predictable (punch lines and humor support). Moreover, in the face-to-face scenario, humor was found to significantly predict participants’ eye-movement behaviors with respect to smiling facial areas: the presence of irony predicted longer fixation time on the interlocutor’s mouth, while the absence of humor predicted longer fixation time on the interlocutor’s eyes. The results of these studies are discussed in the context of humor detection, e.g., recent advances and challenges in the use of humor in human-computer interaction, and in the context of humor studies applied to second language teaching and learning.

January 16th, 2018

Speaker: Stefanie Kuchinsky (HESP, MNC)
Title: Assessing listening effort: Pupillometry and neural measures of speech recognition in adverse conditions

Abstract:

Understanding speech in noise often requires considerable effort at the cost of performance on other daily-life tasks. Such effort can vary substantially across listening conditions and across listeners, even when speech recognition accuracy is similar. I will present a series of studies that have used pupillometry to characterize factors that contribute to listening effort, particularly for older adults with hearing loss. Such factors may be external (e.g., signal-to-noise ratio) or internal (e.g., cognitive capacity) to the listener and be stable (e.g., individual differences) or transient (e.g., fluctuations in alertness). I will describe how neuroimaging can be used to validate purported objective measures of effort by linking them with the engagement of specific sensory and executive function systems. I will conclude by discussing future directions for improving clinical assessments and remediations by including measures of listening effort.

January 17th, 2018

Speaker: Natasha J. Cabrera
Title: Fathers (and Mothers) and Children’s Development: Evidence from Early to Middle Childhood

Abstract:

In this talk I provide an overview of the research on fathers and children’s development during the early childhood period. I begin with (1) a brief discussion of the early research on parenting, which focused mostly on mothers; (2) outline the policy and social context that makes renewed attention to the role of fathers in children’s lives imperative; and (3) highlight several studies from our lab on the contributions that fathers (and mothers) make to their children’s cognitive and social development over the early years.

January 18th, 2018

Speaker: Anton Rytting
Title: The Arabic Corpus of Auditory Dictation Errors: A dataset for understanding listening accuracy in learners of Arabic

Abstract:

Learner corpora provide a useful window into second language acquisition, particularly into the errors that non-native speakers commonly make at various levels of language proficiency. Most learner corpora capture language production (written or spoken), but learner listening perception is also important to understand, as listening comprehension errors can lead to communication breakdowns and impede accurate vocabulary acquisition. CASL has collected a dataset designed to shed light on listening errors by learners of Arabic, called the Arabic Corpus of Auditory Dictation Errors (ArCADE). I will describe how the ArCADE was created, its features, some initial analyses we have done with it, and its applications to spelling correction and error-tolerant search. I will conclude by discussing additional potential uses for the dataset.

January 19th, 2018

Speaker: Jordan Boyd-Graber
Title: Opening the Black Box of Machine Learning: Interactive, Interpretable Interfaces for Exploring Linguistic Tasks

Abstract: Machine learning is ubiquitous, but most users treat it as a black box: a handy tool that suggests purchases, flags spam, or autocompletes text. I present qualities that ubiquitous machine learning should have to allow for a future filled with fruitful, natural interactions with humans: interpretability, interactivity, and an understanding of human qualities. After introducing these properties, I present machine learning applications that begin to fulfill them. I begin with a traditional information-processing task, making sense of and categorizing large document collections, and show that machine learning methods can provide interpretable, efficient techniques for doing so with a human in the loop. From there, I turn to language-based games that require machines and humans to compete and cooperate, and I discuss how this can improve and measure interpretability in machine learning.