Saarland University, 2009. — 134 p.
Machine learning requires the use of prior assumptions, which can be encoded into learning algorithms via regularisation techniques. In this thesis, we examine in three examples how suitable regularisation criteria can be formulated, what their meaning is, and how they lead to efficient machine learning algorithms. Firstly, we describe a joint framework for positive definite kernels, Gaussian processes, and regularisation operators, three objects that are commonly used in machine learning. Within this framework, it is then straightforward to see that linear differential equations form an important special case of regularisation operators. The novelty of our description lies in the broad, unifying view connecting kernel methods and linear system identification. Secondly, we discuss Bayesian inference and experimental design for sparse linear models. The model is applied to the task of gene regulatory network reconstruction, where the assumed network sparsity improves reconstruction accuracy and our proposed experimental design setup significantly outperforms previous methods. Finally, we examine non-parametric regression between Riemannian manifolds, a topic that has received little attention so far. We propose a regularised empirical risk minimisation framework and use tools from differential geometry to ensure that it does not depend on the chosen representations of the input and output manifolds. We apply our approach to several practical learning tasks in robotics and computer graphics.
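The first contribution rests on the standard correspondence between regularised empirical risk minimisation with a positive definite kernel and Gaussian process regression. The sketch below (not taken from the thesis; the kernel choice, data, and all function names are illustrative assumptions) verifies numerically in plain NumPy that the kernel ridge regression estimate and the GP posterior mean coincide when the regularisation weight equals the observation-noise variance.

```python
# Minimal sketch, assuming a squared-exponential kernel and synthetic 1-D data:
# the GP posterior mean and the kernel ridge regression (regularised empirical
# risk minimisation in an RKHS) prediction share the same closed form when the
# regularisation weight lam equals the GP noise variance.
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    """Squared-exponential kernel matrix between the rows of A and B."""
    sq_dists = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-0.5 * sq_dists / lengthscale**2)

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(30, 1))                 # training inputs
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(30)      # noisy observations
X_test = np.linspace(-3.0, 3.0, 50)[:, None]              # test inputs

noise_var = 0.01                                           # GP noise variance
K = rbf_kernel(X, X)
K_star = rbf_kernel(X_test, X)

# Gaussian process regression: posterior mean at the test points.
gp_mean = K_star @ np.linalg.solve(K + noise_var * np.eye(30), y)

# Kernel ridge regression: argmin_f  sum_i (y_i - f(x_i))^2 + lam * ||f||_H^2.
# By the representer theorem its minimiser has the same closed form as above
# when lam = noise_var.
lam = noise_var
krr_pred = K_star @ np.linalg.solve(K + lam * np.eye(30), y)

print(np.allclose(gp_mean, krr_pred))                      # True
```

The agreement is exact because both estimators reduce to the same regularised linear solve; what the thesis adds on top of this well-known fact is the interpretation of the corresponding regularisation operator, for instance as a linear differential operator.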