
Deep Learning Stage
Friday, January 25


Deep Robotic Learning
Deep learning has been demonstrated to achieve excellent results in a range of passive perception tasks, from recognizing objects in images to recognizing human speech. However, extending the success of deep learning into domains that involve active decision making has proven challenging, because the physical world presents an entirely new dimension of complexity to the machine learning problem. Machines that act intelligently in open-world environments must reason about temporal relationships, cause and effect, and the consequences of their actions, and must adapt quickly, follow human instruction, and remain safe and robust. Although the basic mathematical building blocks for such systems -- reinforcement learning and optimal control -- have been studied for decades, these techniques have been difficult to extend to real-world control settings. For example, although reinforcement learning methods have been demonstrated extensively in settings such as games, their applicability to real-world environments requires new and fundamental innovations: not only does the sample complexity of such methods need to be reduced by orders of magnitude, but we must also study generalization, stability, and robustness. In this talk, I will discuss how deep learning and reinforcement learning methods can be extended to enable real-world robotic control, with an emphasis on techniques that generalize to new situations, objects, and tasks. I will discuss how model-based reinforcement learning can enable sample-efficient control, how model-free reinforcement learning can be made efficient, robust, and reliable, and how meta-learning can enable robotic systems to adapt quickly to new tasks and new situations.
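The model-based recipe the abstract mentions (fit a dynamics model from logged transitions, then plan against the learned model) can be sketched in miniature. Everything below is an invented illustration, not code from the talk: a scalar linear system, a least-squares model fit, and a simple random-shooting planner.

```python
import random

random.seed(0)

# True (unknown to the agent) scalar dynamics it interacts with.
def true_step(s, u):
    return 0.9 * s + 0.5 * u

# 1) Collect transitions with random actions: the sample cost is paid once.
data = []
s = 0.0
for _ in range(200):
    u = random.uniform(-1, 1)
    s_next = true_step(s, u)
    data.append((s, u, s_next))
    s = s_next

# 2) Fit a linear model s' ~ a*s + b*u by least squares (2x2 normal equations).
Sss = sum(si * si for si, ui, sn in data)
Suu = sum(ui * ui for si, ui, sn in data)
Ssu = sum(si * ui for si, ui, sn in data)
Ssn = sum(si * sn for si, ui, sn in data)
Sun = sum(ui * sn for si, ui, sn in data)
det = Sss * Suu - Ssu * Ssu
a_hat = (Ssn * Suu - Sun * Ssu) / det
b_hat = (Sss * Sun - Ssu * Ssn) / det

# 3) Plan with the learned model: random-shooting MPC toward a goal state.
def plan(state, goal, horizon=5, n_candidates=256):
    best_u, best_cost = 0.0, float("inf")
    for _ in range(n_candidates):
        seq = [random.uniform(-1, 1) for _ in range(horizon)]
        sim, cost = state, 0.0
        for u in seq:
            sim = a_hat * sim + b_hat * u   # roll out the *learned* model
            cost += (sim - goal) ** 2
        if cost < best_cost:
            best_cost, best_u = cost, seq[0]
    return best_u  # execute only the first action, then replan

goal = 2.0
s = 0.0
for _ in range(30):
    s = true_step(s, plan(s, goal))

print(a_hat, b_hat, s)
```

Because the planner replans at every step against the fitted model, the agent reaches the neighborhood of the goal using only the 200 random transitions collected up front, which is the sample-efficiency argument for model-based methods in toy form.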

Sergey Levine, UC Berkeley

Assistant Professor, UC Berkeley
Sergey Levine received a BS and MS in Computer Science from Stanford University in 2009, and a Ph.D. in Computer Science from Stanford University in 2014. He joined the faculty of the Department of Electrical Engineering and Computer Sciences at UC Berkeley in fall 2016. His work...

Friday January 25, 2019 9:55am - 10:15am
Grand Ballroom Hyatt Regency San Francisco, 5 Embarcadero Center, San Francisco, CA 94111, USA


Meta-Learning Deep Networks
Deep learning has enabled significant advances in a variety of domains; however, it relies heavily on large labeled datasets. I will discuss how meta-learning, or learning to learn, can enable us to adapt deep models to new tasks with tiny amounts of data by leveraging data from other tasks. Using these meta-learning techniques, I will show how we can build better unsupervised learning algorithms, build agents that can adapt online to changing environments, and build robots that can interact with a new object after watching a single demonstration.
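As a rough illustration of "learning to learn," here is a toy first-order meta-learning sketch in the style of Reptile (a simpler relative of MAML). The task family, scalar model, and hyperparameters are invented for illustration and are not from the talk: each task is a 1-D regression problem, and meta-training nudges the initialization toward each task's adapted weights so that a single gradient step on a new task already works well.

```python
import random

random.seed(0)

# Task family: scalar regression y = w * x, with task parameter w ~ U(1, 3).
def sample_task():
    w = random.uniform(1.0, 3.0)
    xs = [random.uniform(-1, 1) for _ in range(10)]
    ys = [w * x for x in xs]
    return xs, ys

def loss(theta, xs, ys):
    return sum((theta * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def sgd_steps(theta, xs, ys, lr=0.5, steps=3):
    # Plain gradient descent on the squared error of the scalar model.
    for _ in range(steps):
        grad = sum(2 * (theta * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        theta -= lr * grad
    return theta

# Reptile-style meta-training: move the init toward each task's solution.
theta = 0.0
for _ in range(500):
    xs, ys = sample_task()
    adapted = sgd_steps(theta, xs, ys)
    theta += 0.1 * (adapted - theta)

# Adapt to a brand-new task with ONE gradient step from the meta-learned init.
xs, ys = sample_task()
meta_adapted = sgd_steps(theta, xs, ys, steps=1)
print(theta, loss(meta_adapted, xs, ys))
```

The meta-learned initialization ends up near the center of the task distribution, so one step of adaptation with only ten examples already yields a low loss, whereas adapting from an arbitrary initialization typically does not.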

Chelsea Finn, Google Brain & Berkeley AI Research

Research Scientist & Post-doctoral Scholar, Google Brain & Berkeley AI Research
Chelsea Finn is a research scientist at Google Brain and post-doctoral scholar at Berkeley AI Research. Starting in 2019, she will join the faculty in CS at Stanford University. She is interested in how learning algorithms can enable machines to acquire general notions of intelligence...

Friday January 25, 2019 11:05am - 11:30am
Grand Ballroom Hyatt Regency San Francisco, 5 Embarcadero Center, San Francisco, CA 94111, USA


Latent Structure in Deep Robotic Learning
Traditionally, deep reinforcement learning has focused on learning a single skill in isolation and from scratch. This often leads to repeatedly learning the right representation for each skill individually, even though such representations could likely be shared between different skills. In contrast, there is some evidence that humans efficiently reuse previously learned skills to acquire new ones, e.g. by sequencing or interpolating between them.
In this talk, I will demonstrate how one can discover latent structure when learning multiple skills concurrently. In particular, I will present a first step towards learning robot skill embeddings that enable reusing previously acquired skills, and I will show how these ideas apply to multi-task reinforcement learning, sim-to-real transfer, and imitation learning.
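The idea of a shared latent skill space can be sketched with a toy factorization: a shared decoder maps per-skill latent codes to actions, both are trained jointly, and interpolating the learned codes then yields a behavior that was never trained. Everything below (the scalar decoder, the two skills, their target actions) is an invented illustration, not the talk's method.

```python
# Shared decoder: action = w * z, with one latent code z[k] per skill.
w = 0.1
z = {"forward": 0.5, "backward": -0.5}   # per-skill latent codes
targets = {"forward": 1.0, "backward": -1.0}

lr = 0.1
for _ in range(2000):
    for k in z:
        err = w * z[k] - targets[k]
        # Joint gradient step on the shared decoder and the skill's latent.
        w_grad = 2 * err * z[k]
        z_grad = 2 * err * w
        w -= lr * w_grad
        z[k] -= lr * z_grad

# Reuse: interpolating between learned latents yields a NEW behavior.
z_new = 0.5 * (z["forward"] + z["backward"])
action_new = w * z_new   # near 0: a "stay in place" skill never trained
print(w, z, action_new)
```

Because the decoder is shared, structure learned for one skill transfers to the others, and points between the learned codes decode to sensible intermediate behaviors, which is the interpolation property skill-embedding methods aim for at scale.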

Karol Hausman, Google Brain

Research Scientist, Google Brain
Karol Hausman is a Research Scientist at Google Brain in Mountain View, California working on robotics and machine learning. He is interested in enabling robots to autonomously acquire general-purpose skills with minimal supervision in real-world environments. His current research...

Friday January 25, 2019 11:30am - 11:50am
Grand Ballroom Hyatt Regency San Francisco, 5 Embarcadero Center, San Francisco, CA 94111, USA