Friday, January 25, 2019 • 1:00pm - 1:30pm
Understanding the limitations of AI: When Algorithms Fail


Automated decision-making tools are currently used in high-stakes scenarios. From natural language processing tools used to automatically determine one’s suitability for a job, to health diagnostic systems trained to predict a patient’s outcome, machine learning models are used to make decisions that can have serious consequences for people’s lives. In spite of the consequential nature of these use cases, vendors of such models are not required to perform specific tests showing the suitability of their models for a given task. Nor are they required to provide documentation describing the characteristics of their models, or to disclose the results of algorithmic audits ensuring that certain groups are not unfairly treated. I will show some examples to examine the dire consequences of basing decisions entirely on machine-learning-based systems, and discuss recent work on auditing and exposing the gender and skin tone bias found in commercial gender classification systems. I will end with the concept of an AI datasheet to standardize information for datasets and pre-trained models, in order to push the field as a whole towards transparency and accountability.

Speakers

Timnit Gebru, Ethical AI Team at Google

Research Scientist, Ethical AI Team, Google
Timnit Gebru is a research scientist on the Ethical AI team at Google and recently finished her postdoc in the Fairness, Accountability, Transparency, and Ethics (FATE) group at Microsoft Research, New York. Prior to that, she was a PhD student in the Stanford Artificial Intelligence Laboratory...


Pacific I-K Hyatt Regency San Francisco, 5 Embarcadero Center, San Francisco, CA 94111, USA