Numerical Analysis with Kernels

This talk is scheduled to fill a gap in the list of colloquium lectures, and it is intended as advanced teaching for our PhD students. I shall review why kernel methods are useful in Numerical Analysis and how they are applied. The starting point is that kernels arise whenever Numerical Analysis works in spaces of functions that allow continuous point evaluations. Using trial functions based on kernels will be shown to be a strategy that is optimal in at least three different respects and consequently leads to numerical methods with certain optimality properties. Standard applications are Numerical Integration, Numerical Differentiation, Interpolation, and Approximation of functions. In each case there are optimality results for kernel-based methods, depending on the function space selected and on the available input data. Connections to other research areas of the institute, e.g. Optimization, Regularization, Pattern Recognition, Integral Equations, or PDE solving, will be pointed out at the appropriate places, though not in full detail, and only if time permits. Numerical examples will be provided for illustration.
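
As a concrete illustration of the kernel trial functions mentioned above, the following minimal sketch interpolates scattered point evaluations using a Gaussian kernel. The target function, the node set, and the shape parameter eps are assumptions made only for this example and are not taken from the talk.

    # Minimal sketch of kernel-based interpolation with a Gaussian kernel.
    # The target function f, the shape parameter eps, and the node set are
    # illustrative choices, not taken from the talk.
    import numpy as np

    def gaussian_kernel(x, y, eps=2.0):
        """Gaussian kernel K(x, y) = exp(-eps^2 * |x - y|^2), evaluated pairwise."""
        return np.exp(-(eps * (x[:, None] - y[None, :])) ** 2)

    # Data: point evaluations of an (assumed) target function on scattered nodes.
    f = lambda x: np.sin(2 * np.pi * x)      # illustrative target function
    nodes = np.linspace(0.0, 1.0, 11)        # interpolation nodes
    values = f(nodes)

    # Kernel trial space: interpolant s(x) = sum_j c_j K(x, x_j).
    # The coefficients solve the symmetric positive definite system K c = f.
    K = gaussian_kernel(nodes, nodes)
    coeffs = np.linalg.solve(K, values)

    # Evaluate the interpolant on a fine grid and report the maximum error.
    grid = np.linspace(0.0, 1.0, 201)
    s = gaussian_kernel(grid, nodes) @ coeffs
    print("max interpolation error:", np.abs(s - f(grid)).max())

The same recovery-from-point-evaluations pattern underlies the other applications named above (integration, differentiation, approximation), with the linear functionals applied to the kernel interpolant instead of plain point evaluation.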