
CLIP Colloquium: Claire Bonial (ARL)

Time: Wednesday, September 19, 2018 - 11:00 AM to 12:00 PM
Location: 4172 A.V. Williams Building


Title: Event Semantics in Text Constructions, Vision, and Human-Robot Dialogue

Abstract: “Ok, robot, make a right and take a picture” – a simple instruction like this exemplifies some of the obstacles in our research on human-robot dialogue: how are make and take to be interpreted? What precise actions should be executed? In this presentation, I explore three challenges: 1) interpreting the semantics of constructions in which verb meanings are extended in novel usages, 2) recognizing activities and events in images/video by employing information about the objects and participants typically involved, and 3) mapping natural language instructions to the physically situated actions executed by a robot. Across these distinct research areas, I leverage both Neo-Davidsonian styles of event representation and the principles of Construction Grammar in addressing these challenges for interpretation and execution.

Bio: Claire Bonial is a computational linguist specializing in the murky world of event semantics.  In her efforts to make this world computationally tractable, she has collaborated on a variety of Natural Language Processing semantic role labeling projects, including PropBank, VerbNet, and Abstract Meaning Representation.  A focused contribution to these projects has been her theoretical and psycholinguistic research on both the syntax and semantics of English light verb constructions (e.g., take a walk, make a mistake).  Bonial received her Ph.D. in Linguistics and Cognitive Science in 2014 from the University of Colorado Boulder.  She began her current position in the Computational and Information Sciences Directorate of the Army Research Laboratory (ARL) in 2015.  Since joining ARL, she has expanded her research portfolio to include multi-modal representations of events (text and imagery/video), as well as human-robot dialogue.