Autonomous systems require efficient learning mechanisms that are tightly integrated with the control loop. We need robust learning methods that guarantee the safety and stability of the overall control system, and adaptive, sample-efficient methods that quickly track changes in the environment or task. Standard learning methods meet neither requirement, so we propose new learning frameworks with such guarantees. We first consider a linear quadratic Gaussian (LQG) system with unknown parameters and design a reinforcement learning (RL) method that combines episodic and online learning. We prove a surprising logarithmic upper bound on the regret, yielding a sample-efficient RL method in this challenging partially observable setting. We then consider unknown nonlinear dynamics. We develop robust regression methods for safe exploration and combine them with chance-constrained trajectory optimization to compute safe roll-outs, ensuring the consistency of our learning method. In this way, we obtain efficient learning methods for control systems with built-in safety and stability guarantees.

Bio: Anima Anandkumar is a Bren Professor at Caltech and Director of ML Research at NVIDIA. She was previously a Principal Scientist at Amazon Web Services. Her honors include an Alfred P. Sloan Fellowship, an NSF CAREER Award, Young Investigator Awards from the DoD, and faculty fellowships from Microsoft, Google, and Adobe. She is part of the World Economic Forum’s Expert Network. She is passionate about designing principled AI algorithms and applying them in interdisciplinary applications. Her research focuses on unsupervised AI, optimization, and tensor methods.