Empirical Inference Conference Paper 2009

Learning Similarity Measure for Multi-Modal 3D Image Registration

Multi-modal image registration is a challenging problem in medical imaging. The goal is to align anatomically identical structures; however, their appearance in images acquired with different imaging devices, such as CT or MR, may be very different. Registration algorithms generally deform one image, the floating image, such that it matches a second, the reference image, by maximizing some similarity score between the deformed and the reference image. Instead of using a universal but a priori fixed similarity criterion such as mutual information, we propose learning a similarity measure in a discriminative manner such that the reference and correctly deformed floating images receive high similarity scores. To this end, we develop an algorithm derived from max-margin structured output learning, and employ the learned similarity measure within a standard rigid registration algorithm. Compared to other approaches, our method adapts to the specific registration problem at hand and exploits correlations between neighboring pixels in the reference and the floating image. Empirical evaluation on CT-MR/PET-MR rigid registration tasks demonstrates that our approach yields robust performance and outperforms state-of-the-art methods for multi-modal medical image registration.
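
The learned similarity measure itself is specific to the paper and not reproduced here. As a rough illustration of the outer registration loop that such a measure plugs into, the following sketch assumes a 2D toy setting with SciPy and uses a negative sum-of-squared-differences score purely as a stand-in for the learned measure; all function names are illustrative and not taken from the paper.

import numpy as np
from scipy.ndimage import affine_transform
from scipy.optimize import minimize


def warp_rigid(floating, params):
    """Apply a rigid (rotation + translation) transform to a 2D floating image."""
    angle, tx, ty = params
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    # affine_transform maps output coordinates to input coordinates:
    # x_in = R @ x_out + offset, so rotate about the image centre and shift.
    center = (np.array(floating.shape) - 1) / 2.0
    offset = center - R @ center + np.array([tx, ty])
    return affine_transform(floating, R, offset=offset, order=1)


def similarity(reference, deformed):
    """Stand-in similarity score (negative SSD). In the paper this role is
    played by a measure learned via max-margin structured output training."""
    return -np.sum((reference - deformed) ** 2)


def register_rigid(reference, floating, x0=(0.0, 0.0, 0.0)):
    """Estimate rigid parameters by maximizing the similarity score."""
    objective = lambda p: -similarity(reference, warp_rigid(floating, p))
    result = minimize(objective, x0, method="Nelder-Mead")
    return result.x


if __name__ == "__main__":
    # Smooth synthetic reference image (Gaussian blob), then misalign it.
    y, x = np.mgrid[0:64, 0:64]
    reference = np.exp(-((x - 24) ** 2 + (y - 40) ** 2) / 200.0)
    floating = warp_rigid(reference, (0.05, 2.0, -1.0))
    # The recovered parameters approximate the inverse of the applied misalignment.
    print(register_rigid(reference, floating))

Swapping the stand-in score for a learned, modality-specific measure is the point of the paper: the optimization loop stays the same, only the similarity function changes.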

Author(s): Lee, D. and Hofmann, M. and Steinke, F. and Altun, Y. and Cahill, ND. and Schölkopf, B.
Book Title: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Pages: 186-193
Year: 2009
Month: June
Publisher: IEEE Service Center
Bibtex Type: Conference Paper (inproceedings)
Address: Piscataway, NJ, USA
DOI: 10.1109/CVPRW.2009.5206840
Event Name: CVPR 2009
Event Place: Miami, FL, USA
Language: en
Organization: Max-Planck-Gesellschaft
School: Biologische Kybernetik

BibTeX

@inproceedings{5777,
  title = {Learning Similarity Measure for Multi-Modal 3D Image Registration},
  booktitle = {Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition},
  abstract = {Multi-modal image registration is a challenging problem
  in medical imaging. The goal is to align anatomically
  identical structures; however, their appearance in images
  acquired with different imaging devices, such as CT
  or MR, may be very different. Registration algorithms generally
  deform one image, the floating image, such that it
  matches with a second, the reference image, by maximizing
  some similarity score between the deformed and the reference
  image. Instead of using a universal, but a priori fixed
  similarity criterion such as mutual information, we propose
  learning a similarity measure in a discriminative manner
  such that the reference and correctly deformed floating
  images receive high similarity scores. To this end, we
  develop an algorithm derived from max-margin structured
  output learning, and employ the learned similarity measure
  within a standard rigid registration algorithm. Compared
  to other approaches, our method adapts to the specific registration
  problem at hand and exploits correlations between
  neighboring pixels in the reference and the floating image.
  Empirical evaluation on CT-MR/PET-MR rigid registration
  tasks demonstrates that our approach yields robust performance
  and outperforms the state of the art methods for
  multi-modal medical image registration.},
  pages = {186-193},
  publisher = {IEEE Service Center},
  organization = {Max-Planck-Gesellschaft},
  school = {Biologische Kybernetik},
  address = {Piscataway, NJ, USA},
  month = jun,
  year = {2009},
  slug = {5777},
  author = {Lee, D. and Hofmann, M. and Steinke, F. and Altun, Y. and Cahill, ND. and Sch{\"o}lkopf, B.},
  month_numeric = {6}
}