# Multitask learning: the Bayesian way

Multi-task learning lends itself particularly well to a Bayesian approach. Cross-inference between tasks can be implemented by sharing parameters in the likelihood model and in the prior for the task-specific model parameters. By choosing different priors, one can implement task clustering and task gating. Throughout my presentation, predicting single-copy newspaper sales will serve as a running example.

*Author: Tom Heskes, Radboud University Nijmegen*
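To make the idea of cross-inference through a shared prior concrete, here is a minimal sketch (not the speaker's actual model): each task has its own regression weights, drawn from a common Gaussian prior whose mean couples the tasks. All names, dimensions, and hyperparameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each "task" is one sales outlet; we predict
# sales from a few features (e.g. day of week, weather).
n_tasks, n_samples, n_features = 5, 20, 3

w_shared = rng.normal(0.0, 1.0, n_features)  # shared prior mean over tasks
tasks, true_weights = [], []
for _ in range(n_tasks):
    w_t = w_shared + rng.normal(0.0, 0.3, n_features)  # task-specific weights
    X = rng.normal(size=(n_samples, n_features))
    y = X @ w_t + rng.normal(0.0, 0.1, n_samples)
    tasks.append((X, y))
    true_weights.append(w_t)

# Hyperparameters assumed known here for simplicity; in a full
# hierarchical model they would be inferred as well.
tau2, sigma2 = 0.3**2, 0.1**2

# Estimate the shared prior mean by pooling data across all tasks.
X_all = np.vstack([X for X, _ in tasks])
y_all = np.concatenate([y for _, y in tasks])
mu = np.linalg.solve(X_all.T @ X_all, X_all.T @ y_all)

# Per-task posterior mean under the prior N(mu, tau2 * I): each task's
# estimate is shrunk toward the shared mean, which is how information
# flows between tasks.
posterior_means = []
for X, y in tasks:
    A = X.T @ X / sigma2 + np.eye(n_features) / tau2
    b = X.T @ y / sigma2 + mu / tau2
    posterior_means.append(np.linalg.solve(A, b))
```

Replacing the single shared Gaussian with a mixture of Gaussians would give the task-clustering variant mentioned in the abstract: tasks sharing a mixture component shrink toward a common cluster mean instead of one global mean.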