Some Mathematical Tools for Machine Learning
These are lectures on some fundamental mathematics underlying many approaches and algorithms in machine learning. They are not about particular learning algorithms; they are about the basic concepts and tools upon which such algorithms are built. Students often feel intimidated by such material: there is a vast amount of "classical mathematics", and it can be hard to see the wood for the trees. The main topics of these lectures are Lagrange multipliers, functional analysis, some notes on matrix analysis, and convex optimization. I've concentrated on things that are often not dwelt on in typical CS coursework. Lots of examples are given; if it's green, it's a puzzle for the student to think about. These lectures are far from complete: perhaps the most significant omissions are probability theory, statistics for learning, information theory, and graph theory. I hope eventually to turn all this into a series of short tutorials. Please let me know of any errors, etc.
From Chris Burges's homepage: http://research.microsoft.com/~cburges
- One equality constraint
- One equality constraint, cont.
- Multiple equality constraints
- One inequality constraint
- Multiple inequality constraints
- A simple example
- Another simple example
- Simple exercises
- Resource allocation
- A variational problem
- Which univariate distribution has max entropy?
- Which univariate distribution has max entropy?, cont.
- Max Entropy for Discrete Distributions + Linear Constraints
- Basic Concepts in Functional Analysis
- What is a Field?
- Field: Examples
- How Many Fields Are There?
- What is a Vector Space?
- Vector Spaces: Field Matters!
- Vector Spaces: More Examples
- What is an Inner Product?
- Inner Product: Examples
- Inner Product: Trace
- Inner Product is General
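As a taste of the first topic above (one equality constraint), here is a minimal sketch; the objective and constraint are illustrative choices of mine, not taken from the lectures. Maximize f(x, y) = x + y subject to g(x, y) = x² + y² = 1: the Lagrange condition ∇f = λ∇g gives x = y = λ = 1/√2, and the maximum value is √2.

```python
import math

# Maximize f(x, y) = x + y subject to g(x, y) = x**2 + y**2 - 1 = 0.
# Stationarity of the Lagrangian L = f - lam * g requires
#   1 - 2*lam*x = 0  and  1 - 2*lam*y = 0,
# so x = y, and the constraint then forces x = y = lam = 1/sqrt(2).
x = y = lam = 1.0 / math.sqrt(2.0)

# Check stationarity of the Lagrangian and feasibility of the point.
assert abs(1.0 - 2.0 * lam * x) < 1e-12
assert abs(1.0 - 2.0 * lam * y) < 1e-12
assert abs(x * x + y * y - 1.0) < 1e-12

# Sanity check: no sampled point on the constraint circle beats f(x, y).
f_star = x + y  # = sqrt(2)
for k in range(1000):
    t = 2.0 * math.pi * k / 1000
    assert math.cos(t) + math.sin(t) <= f_star + 1e-12
print(f_star)
```

The same stationarity-plus-feasibility system generalizes directly to the multiple-constraint case covered later, with one multiplier per constraint.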
Author: Chris Burges, Microsoft Research