CLIP Talk - Tom Hartvigsen
Hear from Tom Hartvigsen (University of Virginia) at CLIP's upcoming talk! A virtual attendance option is available.
Continually Editing Vision-Language Models
Abstract: Despite their incredible performance on hard machine learning tasks, deployed language models will always make mistakes due to ever-changing data, labels, and user needs. For any given task, expert users are the most likely to find errors, yet they are typically excluded from model development and upkeep. Even worse, as models get bigger, our classic ways to update them (like retraining) become intractable, especially when errors arise sequentially.
In this talk, I will describe some of our recent progress on making targeted "edits" to big, pre-trained models. I will first describe a general approach to editing language models thousands of times without modifying their weights, then cover recent extensions to editing vision-language models. Finally, I will share some of our recent and ongoing work on letting dermatologists directly edit the multi-modal AI models they are already starting to use, ultimately aiming to let expert users gain ownership over their models by participating in model development and maintenance.
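The abstract leaves the mechanism at a high level, but the core idea behind weight-preserving editing (as in key-value adaptor methods such as GRACE) can be sketched as a small cache attached to one frozen layer: each edit is stored as a (key, value) pair over that layer's activations, and inputs falling near a stored key receive the edited output while all other inputs pass through the original model unchanged. The sketch below is a hypothetical illustration under those assumptions, not the speaker's exact method; the EditCache class, the radius threshold, and add_edit are invented names.

```python
import torch

class EditCache(torch.nn.Module):
    """Hypothetical sketch of weight-preserving model editing:
    edits live in a key-value cache beside one frozen layer,
    so the pre-trained weights are never modified."""

    def __init__(self, layer: torch.nn.Module, radius: float = 1.0):
        super().__init__()
        self.layer = layer    # frozen pre-trained layer
        self.radius = radius  # how close an input must be to trigger an edit
        self.keys = []        # cached input activations (one per edit)
        self.values = []      # replacement outputs (one per edit)

    def add_edit(self, key: torch.Tensor, value: torch.Tensor) -> None:
        # Store one targeted edit; the layer's weights stay untouched,
        # so thousands of edits can accumulate sequentially.
        self.keys.append(key.detach())
        self.values.append(value.detach())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.keys:
            # Find the nearest stored key to the incoming activation.
            dists = torch.stack([torch.norm(x - k) for k in self.keys])
            i = int(torch.argmin(dists))
            if dists[i] < self.radius:
                return self.values[i]  # input matches an edit: override
        return self.layer(x)           # otherwise defer to the base model
```

Because edits are stored rather than trained into the weights, the same cache could in principle wrap a layer of a vision-language model, which is the extension direction the talk describes.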
Bio: Tom Hartvigsen is an Assistant Professor of Data Science at the University of Virginia. He leads a research group that develops machine learning methods trustworthy, robust, and responsible enough for deployment in high-stakes, ever-changing settings, especially in healthcare. Tom's group regularly publishes in the top venues for machine learning, NLP, and medicine. Before joining UVA, Tom was a postdoc at MIT CSAIL, and he received his PhD in Data Science from Worcester Polytechnic Institute.