
LSLT: Jordan Boyd-Graber Ying (CS/UMIACS/LSC)

Time: 
Thursday, September 14, 2023 - 12:15 PM to 1:30 PM
Location: 
Language Science Center (2130 H.J. Patterson Hall)

 

LSLT: Upon Reflection. This semester, we've asked presenters to give reflective talks about their most prominent work from the past. How did they understand the question then, and how do they see it now? See the full lineup here! Lunch will be served starting at 12:15. Vegetarian options available. Let us know if you have other dietary restrictions!

This week: Jordan Boyd-Graber, Associate Professor in Computer Science, UMIACS, and LSC

If We Want AI to be Interpretable, We Need to Measure Interpretability

Abstract: AI tools are ubiquitous, but most users treat them as black boxes: handy tools that suggest purchases, flag spam, or autocomplete text. While researchers have presented explanations for making AI less of a black box, a lack of metrics makes it hard to optimize explicitly for interpretability. Thus, I propose two metrics for interpretability suitable for unsupervised and supervised AI methods.
For unsupervised topic models, I present our proposed "intruder" interpretability metric, show how it contradicts the previous evaluation metric for topic models (perplexity), and discuss its uptake in the community over the last decade. For supervised question answering approaches, I show how human-computer cooperation can be measured and directly optimized by a multi-armed bandit approach to learn what kinds of explanations help specific users.
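For readers unfamiliar with the intruder metric mentioned above: the idea is to show annotators a topic's top words plus one "intruder" word from a different topic, and score the topic by how often annotators spot the intruder. A minimal sketch of that scoring step (the function name, data layout, and example words are illustrative, not taken from the talk's papers):

```python
def model_precision(responses, intruders):
    """Score topics by intruder detection rate.

    responses: {topic_id: list of words chosen by each annotator}
    intruders: {topic_id: the true intruder word shown for that topic}
    Returns {topic_id: fraction of annotators who picked the intruder}.
    """
    return {
        topic: sum(choice == intruders[topic] for choice in choices) / len(choices)
        for topic, choices in responses.items()
    }

# Hypothetical example: four annotators judged each of two topics.
responses = {
    0: ["banana", "banana", "loss", "banana"],          # ML topic, intruder "banana"
    1: ["gradient", "tennis", "gradient", "gradient"],  # sports topic, intruder "gradient"
}
intruders = {0: "banana", 1: "gradient"}
print(model_precision(responses, intruders))  # {0: 0.75, 1: 0.75}
```

A high score means humans can tell which word does not belong, i.e., the topic's top words form a coherent set, which is exactly what perplexity fails to capture.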

Relevant papers: