
CLIP Colloquium: Eric Wallace (UC Berkeley)

Time: Tuesday, February 14, 2023 - 11:00 AM to 12:00 PM
Location: 4105 Brendan Iribe Center

Memorization in Large Language Models

Abstract: Modern NLP is dominated by scale: today's language models (LMs) use supermassive parameter counts, dataset sizes, and compute budgets. In this talk, I will show that large LMs "memorize" their training data in various settings. This can sometimes be beneficial; for example, memorization allows models to learn and recall knowledge from their pre-training data when solving downstream tasks. On the other hand, memorization can lead to legal concerns (e.g., generating copyrighted data or outputting medical documents), and over-reliance on memorization can cause failures in reasoning on novel tasks and inputs. Throughout the talk, I will focus in particular on actionable insights that we can derive from these analyses, especially with respect to training strategies, model architectures, and dataset design.

Papers discussed: https://arxiv.org/abs/2012.07805, https://arxiv.org/abs/2202.06539, https://arxiv.org/abs/2207.00099, https://arxiv.org/abs/2211.08411, https://arxiv.org/abs/2301.13188, and a final WIP paper.

Bio: Eric Wallace is a 4th-year PhD student at UC Berkeley advised by Dan Klein and Dawn Song. His research interests are in making large language models more robust, trustworthy, secure, and private. Eric's work is supported by the Apple Fellowship in AI/ML, and he has previously worked at FAIR, AI2, and the University of Maryland.