CLIP Colloquium: Isabelle Augenstein (U of Copenhagen)

Time: Wednesday, December 08, 2021, 11:00 AM to 12:00 PM
Location: 4105 Brendan Iribe Center and online

Zoom: https://umd.zoom.us/j/98806584197?pwd=SXBWOHE1cU9adFFKUmN2UVlwUEJXdz09


Accountable and Robust Automatic Fact Checking

Abstract: The past decade has seen a substantial rise in the amount of mis- and disinformation online, from targeted disinformation campaigns designed to influence politics to the unintentional spreading of misinformation about public health. This development has spurred research in the area of automatic fact checking, a knowledge-intensive and complex reasoning task. Most existing fact checking models predict a claim's veracity with black-box models, which often lack explanations of the reasons behind their predictions and contain hidden vulnerabilities. The lack of transparency in fact checking systems, and in ML models in general, has been exacerbated by increased model size and by "The right...to obtain an explanation of the decision reached" enshrined in European law. This talk presents some first solutions for generating explanations for fact checking models. It further examines how to assess the generated explanations using diagnostic properties, and how optimising for these diagnostic properties can improve the quality of the generated explanations. Finally, the talk examines how to systematically reveal vulnerabilities of black-box fact checking models.
 
Bio: Isabelle Augenstein is an Associate Professor at the University of Copenhagen, Department of Computer Science, where she heads the Copenhagen Natural Language Understanding research group as well as the Natural Language Processing section. She also co-heads the research team at CheckStep Ltd, a content moderation start-up. Her main research interests are fact checking, low-resource learning, and explainability. Before starting her faculty position, she was a postdoctoral researcher at University College London, and before that a PhD student at the University of Sheffield. She currently holds a prestigious DFF Sapere Aude Research Leader fellowship on 'Learning to Explain Attitudes on Social Media' and is president of the ACL Special Interest Group on Representation Learning (SIGREP).