Bayesian models of human inductive learning

Posted in Science on September 12, 2008

In everyday learning and reasoning, people routinely draw successful generalizations from very limited evidence. Even young children can infer the meanings of words, hidden properties of objects, or the existence of causal relations from just one or a few relevant observations, far outstripping the capabilities of conventional learning machines. How do they do it? And how can we bring machines closer to these human-like learning abilities? I will argue that people's everyday inductive leaps can be understood as approximations to Bayesian computations operating over structured representations of the world, what cognitive scientists have called "intuitive theories" or "schemas". For each of several everyday learning tasks, I will consider how the appropriate knowledge representations are structured and used, and how these representations could themselves be learned via Bayesian methods. The key challenge is to balance the need for strongly constrained inductive biases, which are critical for generalization from very few examples, against the flexibility to learn the structure of new domains and to acquire new inductive biases suited to environments for which we could not have been pre-programmed. The models I discuss connect to several directions in contemporary machine learning, including semi-supervised learning, structure learning in graphical models, hierarchical Bayesian modeling, and nonparametric Bayes.
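As a concrete illustration of the kind of computation the abstract describes, here is a minimal sketch (not taken from the lecture) of Bayesian generalization from a few positive examples, loosely in the spirit of Tenenbaum's "number game": hypotheses are candidate concepts, the likelihood follows the size principle (smaller hypotheses consistent with the data are favored), and generalization to a new item averages over the posterior. The specific hypothesis space and the uniform prior are illustrative assumptions, not the models presented in the talk.

```python
# Minimal sketch of Bayesian concept learning from a few examples.
# Hypothesis space and priors are illustrative assumptions.

def make_hypotheses(limit=100):
    """Candidate concepts over the integers 1..limit."""
    hyps = {
        "even": {n for n in range(1, limit + 1) if n % 2 == 0},
        "odd": {n for n in range(1, limit + 1) if n % 2 == 1},
        "squares": {n * n for n in range(1, 11)},
        "powers of 2": {2 ** k for k in range(1, 7)},
        "multiples of 10": set(range(10, limit + 1, 10)),
    }
    # Interval concepts, e.g. "numbers between 21 and 40".
    for lo in range(1, limit, 10):
        hyps[f"{lo}-{lo + 19}"] = set(range(lo, min(lo + 20, limit + 1)))
    return hyps

def posterior(examples, hyps):
    """P(h | examples) under a uniform prior and the size principle:
    each example is assumed drawn uniformly from the concept, so
    smaller consistent hypotheses receive higher likelihood."""
    scores = {}
    for name, h in hyps.items():
        if all(x in h for x in examples):
            scores[name] = (1.0 / len(h)) ** len(examples)
        else:
            scores[name] = 0.0
    z = sum(scores.values())
    return {name: s / z for name, s in scores.items()} if z > 0 else scores

def p_generalize(y, examples, hyps):
    """Probability that a new item y belongs to the concept,
    averaging over the posterior (hypothesis averaging)."""
    post = posterior(examples, hyps)
    return sum(p for name, p in post.items() if y in hyps[name])

if __name__ == "__main__":
    hyps = make_hypotheses()
    examples = [16, 8, 2, 64]          # a few observed positive examples
    for y in (4, 10, 17):
        print(y, round(p_generalize(y, examples, hyps), 3))
```

With these examples, both "even numbers" and "powers of 2" are consistent, but the size principle concentrates the posterior on the smaller hypothesis, so generalization to 4 is strong while generalization to 10 is weak. This captures, in toy form, how strong inductive biases plus Bayesian inference support confident generalization from very few observations.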

Author: Josh Tenenbaum, Massachusetts Institute of Technology (MIT)

Watch Video

Tags: Science, Lectures, Computer Science, Machine Learning, VideoLectures.Net, Bayesian Learning, Psychology