September 9, 2016, 2 PM, 302-309
Manifold-valued data naturally occur in many disciplines.
For example, directional data can be represented as points on a unit sphere.
Diffusion tensors in magnetic resonance images are symmetric positive definite (SPD) matrices,
whose space can be identified with the quotient manifold GL(n)/O(n). Likewise, the square-root
representations of orientation distribution functions (ODFs) and probability density functions
(PDFs) lie on the unit Hilbert sphere. These data spaces are known,
a priori, to have rich mathematical structure with well-studied properties.
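For instance, under the square-root representation a density p maps to sqrt(p), which has unit L2 norm and therefore sits on the unit Hilbert sphere; the Fisher-Rao distance between two densities then reduces to the arc length arccos of their L2 inner product. A minimal numerical sketch (illustrative only; the grid and the helper name `sphere_distance` are assumptions, not part of the talk):

```python
import numpy as np

def sphere_distance(p, q, dx):
    """Geodesic (Fisher-Rao) distance between two discretized PDFs
    under the square-root representation on the unit Hilbert sphere."""
    psi_p, psi_q = np.sqrt(p), np.sqrt(q)
    inner = np.sum(psi_p * psi_q) * dx      # L2 inner product of sqrt-densities
    inner = np.clip(inner, -1.0, 1.0)       # guard against rounding error
    return np.arccos(inner)

# Example: two unit-variance Gaussians discretized on a grid
x = np.linspace(-5, 5, 1001)
dx = x[1] - x[0]
gauss = lambda mu: np.exp(-(x - mu) ** 2 / 2) / np.sqrt(2 * np.pi)
p, q = gauss(0.0), gauss(1.0)
print(sphere_distance(p, p, dx))  # ~0: identical densities
print(sphere_distance(p, q, dx))  # positive arc length on the sphere
```

Because the sphere is a well-studied manifold, closed-form geodesics like this replace otherwise intractable distance computations between densities.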
Intuitively, algorithms that exploit this additional structure should admit
more efficient inference procedures. Motivated by this
intuition, in this talk we study the relationship between statistical learning
algorithms and the geometric structures of data spaces encountered in machine
learning, computer vision and neuroimaging using mathematical tools (e.g.
Riemannian geometry). As a result, this framework will give new insights into statistical
inference methods for image analysis and enable the development of new models for
manifold-valued data (and, potentially, manifold-valued parameters) to improve
statistical power. Topics featured in this talk include: manifold statistics
on Riemannian manifolds, manifold-valued multivariate general linear models (mMGLMs),
canonical correlation analysis (CCA) on manifolds, Dirichlet processes for manifold-valued
variables (DP-mMGLMs), interpolations of k-GMMs, and Lie groups.
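To give a flavor of manifold statistics, the Fréchet mean replaces the Euclidean average with the minimizer of summed squared geodesic distances, computed by alternating the manifold's log map (to a tangent space) and exp map (back to the manifold). A minimal sketch on the unit sphere, assuming the standard sphere exp/log maps (illustrative only, not the speaker's implementation):

```python
import numpy as np

def log_map(base, x):
    """Tangent vector at `base` pointing toward `x` (sphere log map)."""
    cos_t = np.clip(base @ x, -1.0, 1.0)
    theta = np.arccos(cos_t)
    if theta < 1e-12:
        return np.zeros_like(base)
    v = x - cos_t * base                      # project x onto tangent space at base
    return theta * v / np.linalg.norm(v)

def exp_map(base, v):
    """Point reached from `base` along tangent vector `v` (sphere exp map)."""
    theta = np.linalg.norm(v)
    if theta < 1e-12:
        return base
    return np.cos(theta) * base + np.sin(theta) * v / theta

def frechet_mean(points, iters=50):
    """Fixed-point iteration: step toward the average of the log maps."""
    mu = points[0]
    for _ in range(iters):
        v = np.mean([log_map(mu, p) for p in points], axis=0)
        mu = exp_map(mu, v)
    return mu

# Three orthonormal points on S^2; by symmetry the mean is (1,1,1)/sqrt(3)
pts = np.eye(3)
print(frechet_mean(pts))
```

The same exp/log template underlies the heavier machinery in the talk (mMGLMs, CCA on manifolds), with the sphere maps swapped for those of the manifold at hand.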