
CLIP Colloquium: Alvin Grissom (Haverford College)

Time: Wednesday, March 24, 2021, 11:00 AM to 12:00 PM
Location: https://umd.zoom.us/j/93207947099?pwd=c096Z3JrZ1FGSXVEVjFWL29PQUV1dz09

 

Examining Racially Biased Language within a Large Corpus of American Football Commentary

Abstract: In the first part of this talk, I will describe our work, published at EMNLP 2019 (with another CLIP alumnus, Mohit Iyyer), on examining racially biased language in sports commentary.  In this work, we construct FOOTBALL, a multi-decade dataset of American football commentary in which players are annotated by race.  Our analysis strongly suggests that commentators use, on average, significantly different language to describe white versus nonwhite athletes.
 
In the second part of the talk, I will talk a bit about what I have learned about advising undergraduates in computational linguistics research and doing research at a liberal arts college.  I will describe some of the work I have advised and some projects in which I'm currently involved.

Bio: A former member of UMD's CLIP lab, Alvin Grissom II is an Assistant Professor of Computer Science at Haverford College, a small undergraduate liberal arts college in the Philadelphia area.  He completed his Ph.D. in 2017 at the University of Colorado Boulder, advised by Jordan Boyd-Graber.  His early work focused on machine learning-based simultaneous machine translation and verb prediction in verb-final languages.  More recently, he has worked on examining pathologies in neural models and racially biased language in sports commentary.  More generally, he is interested in connecting computational approaches to linguistic insight, including psycholinguistics and the neuroscience of language, and in examining the limits of models in learning and capturing linguistic phenomena.