This course is taught as a regular Ph.D. course in the DTU course 029?? "Advanced Topics in Image Analysis, Computer Graphics and Geoinformatics". The first lecture will be given on Monday, February 28th, from 1 pm to 3 pm.
At this first meeting the participants will decide the schedule for the remainder of the course.
Deformable template modelling is an important part of understanding complex patterns in images. Ulf Grenander's seminal work on 2D deformable template modelling (Grenander, Chow & Keenan, 1991) was hugely popularised at the end of the 1990s by the work of Cootes and Taylor (Cootes, Taylor, Cooper & Graham, 1995; Cootes, Edwards & Taylor, 2001; Stegmann, Ersbøll & Larsen, 2003), who formulated linear models for shape variability estimated from annotated training data. Ramsay and Silverman (Ramsay & Silverman, 1997) presented seminal work on functional representations of curves.
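To make the linear approach concrete, the core of such a model can be sketched in a few lines of numerical code. The sketch below is illustrative only, not the implementation of the cited authors: it assumes the landmark shapes have already been aligned (e.g. by Procrustes analysis) and flattened into coordinate vectors, and the function names are our own.

```python
import numpy as np

def shape_model(shapes, n_modes=2):
    # shapes: (n_samples, 2*n_landmarks) flattened, pre-aligned landmarks.
    # Fit a linear shape model: mean shape plus principal modes of variation.
    mean = shapes.mean(axis=0)
    X = shapes - mean
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    modes = Vt[:n_modes]                  # principal modes (unit vectors)
    var = s[:n_modes] ** 2 / len(shapes)  # variance captured by each mode
    return mean, modes, var

def synthesize(mean, modes, b):
    # New shape = mean + sum_i b_i * mode_i (the linear model of the text).
    return mean + b @ modes
```

In practice one would restrict each coefficient b_i to a few standard deviations (sqrt of the mode variance) so that synthesized shapes stay plausible.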
However, in many situations linear models are likely to fail in accurately modelling natural phenomena. For instance, Kendall's shape space for triangles in a plane - the simplest shape imaginable - is a sphere in R3. Linear approximations to the triangle shape space are only valid for low variability in triangle shape. To handle the non-linear, large-scale variations that occur in nature, new models are required. Developments in the statistics and machine learning fields in the new millennium have led to methods for parameterizing low-dimensional manifolds in high-dimensional spaces. These developments form the basis for formulating non-linear shape space models. Examples include the principal curves proposed by Hastie and Stuetzle (Hastie & Stuetzle, 1989); the ISOMAP procedure of Tenenbaum, de Silva & Langford (2000); the Locally Linear Embedding (LLE) of Roweis & Saul (2000); the Laplacian Eigenmaps of Belkin & Niyogi (2002); and the Hessian Eigenmaps of Donoho & Grimes (2003).
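The common recipe behind several of these methods can be illustrated with a bare-bones version of the ISOMAP idea: build a nearest-neighbour graph, approximate geodesic distances along it, and embed with classical multidimensional scaling. This is a minimal sketch under our own naming, using a dense Floyd-Warshall step for brevity; production implementations use sparse shortest-path algorithms and assume the neighbour graph is connected.

```python
import numpy as np

def isomap(X, n_neighbors=8, n_components=2):
    n = len(X)
    # Pairwise Euclidean distances in the ambient (high-dimensional) space.
    D = np.sqrt(((X[:, None] - X[None]) ** 2).sum(-1))
    # k-nearest-neighbour graph: keep only short, local edges.
    G = np.full((n, n), np.inf)
    np.fill_diagonal(G, 0.0)
    for i in range(n):
        nbrs = np.argsort(D[i])[1:n_neighbors + 1]
        G[i, nbrs] = D[i, nbrs]
        G[nbrs, i] = D[i, nbrs]
    # Geodesic distances = shortest paths through the graph (Floyd-Warshall).
    for k in range(n):
        G = np.minimum(G, G[:, [k]] + G[[k], :])
    # Classical MDS on the squared geodesic distances.
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (G ** 2) @ J
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:n_components]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```

Applied to points sampled along a curved one-dimensional structure (such as a helix), the first embedding coordinate essentially recovers arc length, which a linear projection cannot do.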
Furthermore, natural phenomena can often be explained by a small set of underlying parameters. This property has been used for many years in statistics, e.g. in factor rotation (Harman, 1967) for easier interpretation. In recent years sparsity has been used as a design criterion to overcome the problem of the dimensionality of the measurements vastly exceeding the number of observations available. Mathematically, sparsity is invoked by placing an L0 penalty on the parameters. However, this is computationally intractable. Fortunately, in many situations the L1 penalty - for which computationally feasible solutions are available - can act as a proxy for the L0 penalty, as is done for instance in LASSO and LARS regression (Tibshirani, 1996; Efron, Hastie, Johnstone & Tibshirani, 2004).
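Why the L1 penalty produces exact zeros can be seen from its proximal operator, soft-thresholding, which clips small coefficients to zero. The sketch below solves the LASSO problem min_b 0.5*||y - Xb||^2 + lam*||b||_1 by iterative soft-thresholding (ISTA), a standard proximal-gradient method; it is not the LARS algorithm of the cited paper, and the names are our own.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t*||.||_1: shrink towards zero, clip small values.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=2000):
    # Minimize 0.5*||y - Xb||^2 + lam*||b||_1 by proximal gradient descent.
    L = np.linalg.norm(X, 2) ** 2      # Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y)       # gradient of the smooth data term
        b = soft_threshold(b - grad / L, lam / L)
    return b
```

With a sufficiently large penalty lam, coefficients of irrelevant variables are driven exactly to zero, yielding the sparse solutions discussed above, whereas an L2 (ridge) penalty would only shrink them.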
In this course we will read and discuss a series of articles and book chapters concerning the above-mentioned subjects.
Course evaluation is based on the completion of mandatory exercises using Matlab and S-plus, which are available at the following internet addresses: