Probabilistic Methods for Linear Algebra

Linear algebra methods form the basis for the majority of numerical computations. Because of this foundational, ``inner-loop'' role, they have to satisfy strict requirements on computational efficiency and numerical robustness.
Our work has added to a growing understanding that many widely used linear solvers can be interpreted as performing probabilistic inference on the elements of a matrix or a vector, from observations of linear projections of this latent object. In particular, this is true for such foundational algorithms as the method of conjugate gradients and other iterative algorithms in the conjugate directions and projection method classes [].
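As a minimal sketch of this interpretation (illustrative only; the routine below is hypothetical, not a specific published solver), one can treat the latent solution $x$ of a system $Kx = b$ as a Gaussian random variable and condition on noise-free projections $s_i^\top K x = s_i^\top b$:

```python
import numpy as np

def probabilistic_solve(K, b, num_iters):
    """Sketch of a probabilistic linear solver (hypothetical, illustrative).

    Treats the solution x = K^{-1} b as a Gaussian latent variable and
    conditions on noise-free projections s^T K x = s^T b.
    """
    n = b.shape[0]
    m = np.zeros(n)   # prior mean over x
    S = np.eye(n)     # prior covariance over x
    for _ in range(num_iters):
        s = np.random.randn(n)        # (here: random) search direction
        a = K @ s                     # the projection s^T K x equals a^T x
        y = s @ b                     # observed value of that projection
        Sa = S @ a
        g = Sa / (a @ Sa)             # Gaussian conditioning gain
        m = m + g * (y - a @ m)       # posterior mean update
        S = S - np.outer(g, Sa)       # posterior covariance update
    return m, S   # posterior over the solution, with uncertainty S
```

Roughly, with search directions chosen as in the method of conjugate gradients rather than at random, and under a suitable prior, the posterior mean reproduces the solver's iterates; this is the sense in which such algorithms "perform inference".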
Our ongoing research effort focuses on ways to use these insights in the large-scale linear algebra problems encountered in machine learning. There, the most frequent linear algebra task is the least-squares problem of solving
$$ Kx = b, $$
where $K$ is a very large symmetric positive definite matrix (e.g. a kernel Gram matrix, or the Hessian of a deep network loss function).
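For concreteness, here is a textbook implementation of the method of conjugate gradients for such a system (the standard algorithm, shown only for reference; variable names are ours):

```python
import numpy as np

def conjugate_gradients(K, b, tol=1e-8, max_iters=None):
    """Textbook conjugate gradients for K x = b with symmetric
    positive definite K."""
    n = b.shape[0]
    max_iters = n if max_iters is None else max_iters
    x = np.zeros(n)
    r = b - K @ x          # residual
    p = r.copy()           # first search direction
    rs = r @ r
    for _ in range(max_iters):
        Kp = K @ p
        alpha = rs / (p @ Kp)      # exact line search along p
        x += alpha * p
        r -= alpha * Kp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:  # converged
            break
        p = r + (rs_new / rs) * p  # next K-conjugate direction
        rs = rs_new
    return x
```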
A key challenge in the big-data regime is that the matrix, being defined as a function of a large dataset, can only be evaluated with strong stochastic noise caused by data subsampling. Classic iterative solvers, particularly those based on the Lanczos process, like conjugate gradients, are known to be unstable under such stochastic disturbances, which is part of the reason why second-order methods are not popular in deep learning. In recent work we have developed and tested extensions to classic solvers that remain stable [] and tractable in this setting by efficiently re-using information across many iterations [].
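To make the noise source concrete, consider a small hypothetical illustration: if $K$ is the Hessian of a least-squares loss averaged over $N$ data points, then a minibatch evaluation of the matrix-vector product $Kv$ used inside an iterative solver is only a stochastic estimate of the true product (all names and sizes below are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 10_000, 50
A = rng.standard_normal((N, d))   # hypothetical design matrix

# For the loss L(w) = (1/N) * sum_i (a_i^T w - y_i)^2, the Hessian is
# K = (2/N) A^T A, i.e. an average over the dataset.
def hessian_vector_product(v, batch_size=None):
    """K v, evaluated on the full data or on a random minibatch."""
    rows = A if batch_size is None else A[rng.choice(N, batch_size, replace=False)]
    return 2.0 * rows.T @ (rows @ v) / rows.shape[0]

v = rng.standard_normal(d)
exact = hessian_vector_product(v)
noisy = hessian_vector_product(v, batch_size=128)
# Relative error of the subsampled product: this is the stochastic
# disturbance that destabilizes Lanczos-based solvers such as CG.
print(np.linalg.norm(noisy - exact) / np.linalg.norm(exact))
```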