
VASC Seminar

Nicholas Carlini, Research Scientist, Google
Tuesday, November 3
11:00 am to 12:00 pm
Deep Learning: (still) Not Robust

Abstract:

One of the key limitations of deep learning is its inability to generalize to new domains. This talk studies recent attempts at increasing neural network robustness to both natural and adversarial distribution shifts.

Robustness to adversarial examples, inputs crafted specifically to fool machine learning models, is arguably the most difficult type of domain shift. We study 13 defenses recently published at ICLR, ICML, and NeurIPS and find that all of them can be evaded, offering nearly no improvement over the undefended baselines. Worryingly, we are able to break these defenses without any new attack techniques.
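
For context (this is not part of the talk itself): an adversarial example can be produced with something as simple as the one-step fast gradient sign method (FGSM). The PyTorch sketch below is purely illustrative; the toy model, the `eps` budget, and the function name are assumptions, not the speaker's method.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    """One-step FGSM: nudge each pixel in the direction that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Perturb by eps in the sign of the gradient, then keep pixels in [0, 1].
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

# Toy usage: a dummy linear classifier on 28x28 "images".
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
x, y = torch.rand(1, 1, 28, 28), torch.tensor([3])
x_adv = fgsm_attack(model, x, y)  # visually near-identical to x, but may be misclassified
```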

It’s not just adversarially constructed distribution shifts that cause neural networks to suffer: they also fail to generalize across natural distribution shifts that arise in completely benign settings, such as re-sampling a new test set. Despite the many techniques proposed to improve robustness to synthetic shifts, almost none improve robustness across four natural distribution shifts.
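
To make the re-sampled test set comparison concrete: the effect in question is simply the drop in accuracy when the same model is scored on a freshly collected test set. A minimal sketch, where `model` and the two data loaders are hypothetical stand-ins:

```python
import torch

@torch.no_grad()
def accuracy(model, loader):
    """Fraction of examples the model labels correctly."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total

# Hypothetical loaders: the original test set vs. a re-sampled test set
# drawn from the same benign distribution.
# gap = accuracy(model, original_test_loader) - accuracy(model, resampled_test_loader)
```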

Robustness is still a challenge for deep learning, and one that will require extensive work to solve.

Bio:

Nicholas Carlini is a research scientist at Google Brain. He studies the security and privacy of machine learning, for which he has received best paper awards at IEEE S&P and ICML. He obtained his PhD from the University of California, Berkeley in 2018.

Homepage: https://nicholas.carlini.com/

Sponsored in part by: Facebook Reality Labs Pittsburgh