Thursday, January 24, 2019 • 4:40pm - 5:00pm
The Myth of the Interpretable, Robust, Compact and High Performance Deep Neural Network


Most progress in machine learning has been measured by gains in test-set accuracy on tasks like image recognition. However, test-set accuracy appears to be poorly correlated with other design objectives, such as interpretability, robustness to adversarial attacks, or training compact networks that can be deployed in resource-constrained environments. This talk asks whether it is possible to have it all, and, more importantly, how we should measure progress when we want to train models that fulfill multiple criteria.
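The evaluation question the abstract raises can be made concrete with a small scorecard: instead of ranking models by test accuracy alone, check every design objective at once. The following is an illustrative sketch only; the criteria names and thresholds (`min_acc`, `min_adv_acc`, `max_size_mb`) are hypothetical assumptions, not values from the talk.

```python
from dataclasses import dataclass

@dataclass
class ModelReport:
    test_accuracy: float         # standard held-out accuracy
    adversarial_accuracy: float  # accuracy under an adversarial attack
    size_mb: float               # on-disk model size

def meets_criteria(report, min_acc=0.90, min_adv_acc=0.50, max_size_mb=10.0):
    """A model 'has it all' only if every criterion passes, not just accuracy."""
    return (report.test_accuracy >= min_acc
            and report.adversarial_accuracy >= min_adv_acc
            and report.size_mb <= max_size_mb)

# A high-accuracy model can still fail on robustness or compactness:
big_brittle = ModelReport(test_accuracy=0.95, adversarial_accuracy=0.10, size_mb=500.0)
print(meets_criteria(big_brittle))  # False: accuracy alone is not enough
```

Reporting a vector of metrics rather than a single scalar makes the trade-offs the talk describes visible, at the cost of no longer having one number by which to rank models.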

Speakers
Sara Hooker, Google

Artificial Intelligence Resident, Google
Sara Hooker is an Artificial Intelligence Resident at Google Brain doing deep learning research on model compression and reliable explanations of model predictions for black-box models. Her main research interests gravitate towards interpretability, model compression, and security.


Thursday January 24, 2019 4:40pm - 5:00pm
Grand Ballroom, Hyatt Regency San Francisco, 5 Embarcadero Center, San Francisco, CA 94111, USA