Chris Manning (Stanford)

Time: 
Friday, March 10, 2017 - 12:30 PM
Location: 
2114 H.J. Patterson


Note new time: 12:30 PM!

Defining and Parsing Universal Dependencies

Abstract: Universal Dependencies is a framework for cross-linguistically consistent dependency grammar analysis and treebank annotation that has so far been applied to over 50 languages. Its primary goal was to enable multilingual natural language processing applications such as parsing and natural language understanding, but it has already proven useful for a wider range of studies, including in psycholinguistics and language typology. I will first argue that the design of Universal Dependencies optimizes a quite subtle trade-off between a broader range of criteria than linguists usually consider, and that these goals are well met by a simple, fairly spartan, lexicalist approach, which focuses on capturing a level of analysis of (syntactic) grammatical relations.
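To make the level of analysis concrete (this example is illustrative and not taken from the talk), a simplified Universal Dependencies annotation of "She eats fish", showing only the ID, FORM, UPOS, HEAD, and DEPREL columns of the CoNLL-U format, looks like:

```
1   She    PRON   2   nsubj
2   eats   VERB   0   root
3   fish   NOUN   2   obj
```

Each word points to the index of its syntactic head (0 for the root), labeled with a grammatical relation such as nsubj or obj; these typed head-dependent arcs are the core object that UD annotates and that dependency parsers predict.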

I will then turn to parsing. One of the criteria powering the adoption of dependency grammar has been the enormous success of recent dependency parsing models, and I will outline some of our recent work on building neural dependency parsers. Chen and Manning (2014) pioneered the use of neural transition-based dependency parsers, and their success was built on by many others, including Google's SyntaxNet (v1) and the Parsey McParseFace model (Andor et al. 2016). More recently, Dozat and Manning (2017) have further developed very simple graph-based dependency parsing algorithms built using neural attention, raising performance above the level of Andor et al. (2016).
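To sketch what a transition-based dependency parser does (a minimal illustration, not the speaker's code): the parser maintains a stack and a buffer and builds arcs via SHIFT, LEFT-ARC, and RIGHT-ARC transitions. In a neural parser such as Chen and Manning (2014), a classifier over stack/buffer features chooses each transition; here a hypothetical scripted oracle stands in for that classifier.

```python
def parse(words, oracle):
    """Arc-standard parsing: `oracle(stack, buffer)` returns the next
    transition name. In a real system it would be a trained classifier;
    here it is a stand-in (an assumption for illustration)."""
    # Token 0 is the artificial ROOT; tokens 1..n are the words.
    stack, buffer, arcs = [0], list(range(1, len(words) + 1)), []
    while buffer or len(stack) > 1:
        action = oracle(stack, buffer)
        if action == "SHIFT" and buffer:
            stack.append(buffer.pop(0))            # move next word onto the stack
        elif action == "LEFT-ARC" and len(stack) >= 2:
            dep = stack.pop(-2)                    # second-from-top depends on top
            arcs.append((stack[-1], dep))          # (head, dependent)
        elif action == "RIGHT-ARC" and len(stack) >= 2:
            dep = stack.pop()                      # top depends on the item below it
            arcs.append((stack[-1], dep))
    return arcs

# Hand-scripted transition sequence for "She eats fish":
script = iter(["SHIFT", "SHIFT", "LEFT-ARC", "SHIFT", "RIGHT-ARC", "RIGHT-ARC"])
arcs = parse(["She", "eats", "fish"], lambda s, b: next(script))
# arcs: eats<-She, eats->fish, ROOT->eats
```

Graph-based parsers like Dozat and Manning (2017) instead score all possible head-dependent pairs at once (with neural attention) and decode the highest-scoring tree, rather than building it one transition at a time.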

Bio: Christopher Manning is a professor of computer science and linguistics at Stanford University. He works on software that can intelligently process, understand, and generate human language material. He is a leader in applying deep learning to natural language processing, including exploring tree recursive neural networks, sentiment analysis, neural network dependency parsing, the GloVe model of word vectors, neural machine translation, and deep language understanding. He also focuses on computational linguistic approaches to parsing, robust textual inference, and multilingual language processing, including being a principal developer of Stanford Dependencies and Universal Dependencies. Manning is an ACM Fellow, an AAAI Fellow, an ACL Fellow, and a past president of the ACL. He has coauthored leading textbooks on statistical natural language processing and information retrieval. He is a member of the Stanford NLP Group (@stanfordnlp) and manages development of the Stanford CoreNLP software.