
Unbiased group-wise alignment by iterative central tendency estimations

Published online by Cambridge University Press:  24 December 2008

M. S. De Craene*
Affiliation:
Center for Computational Imaging & Simulation Technologies in Biomedicine (CISTIB), Networking Biomedical Research Center on Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Information & Communications Technologies Department, Universitat Pompeu Fabra, Barcelona, Spain
B. Macq
Affiliation:
Communications and Remote Sensing Laboratory, Université catholique de Louvain, Belgium
F. Marques
Affiliation:
Image and Video Processing Group, Technical University of Catalonia, Barcelona, Spain
P. Salembier
Affiliation:
Image and Video Processing Group, Technical University of Catalonia, Barcelona, Spain
S. K. Warfield
Affiliation:
Computational Radiology Laboratory, Harvard Medical School, Departments of Radiology, Children's Hospital, Boston, USA

Abstract

This paper introduces a new approach for the joint alignment of a large collection of segmented images into the same coordinate system while simultaneously estimating an optimal common coordinate system. The atlas resulting from our group-wise alignment algorithm is obtained as the hidden variable of an Expectation-Maximization (EM) estimation. This is achieved by identifying the most consistent label across the collection of images at each voxel in the common frame of coordinates.
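The per-voxel estimation described above can be sketched as a label-frequency map over the collection (a minimal illustration assuming the subjects are already resampled into the common frame; the function and array names are ours, not the paper's):

```python
import numpy as np

def estimate_atlas(segmentations, n_labels):
    """Estimate a probabilistic atlas as the per-voxel label frequency
    across a collection of aligned segmented images.

    segmentations: array of shape (n_subjects, *volume_shape) holding
    integer labels; all subjects are assumed already resampled into
    the common coordinate frame.
    """
    segs = np.asarray(segmentations)
    atlas = np.zeros((n_labels,) + segs.shape[1:])
    for k in range(n_labels):
        # fraction of subjects voting for label k at each voxel
        atlas[k] = (segs == k).mean(axis=0)
    return atlas  # atlas[k, ...] = P(label k at voxel)
```

The most consistent label at a voxel is then simply `atlas.argmax(axis=0)` at that voxel.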
Each subject is iteratively aligned with the current probabilistic atlas until the estimated atlas converges. Two transformation models are applied successively in the alignment process: an affine transformation model and a dense non-rigid deformation field. The metric for both transformation models is the mutual information computed between the probabilistic atlas and each subject. This metric is optimized using gradient-based stochastic optimization (SPSA) in the affine alignment step and using a variational approach for the non-rigid atlas-to-subject transformations.
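A minimal SPSA loop of the kind used for the affine step might look like the following (a generic sketch with standard gain sequences and a generic cost function; the gain constants are illustrative, not the paper's settings):

```python
import numpy as np

def spsa_minimize(f, theta0, n_iter=500, a=0.1, c=0.1,
                  alpha=0.602, gamma=0.101, seed=0):
    """Minimal SPSA sketch: at each step all parameters are perturbed
    simultaneously with a random +/-1 vector, and a two-point gradient
    estimate is formed from only two evaluations of f per iteration."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float).copy()
    for k in range(1, n_iter + 1):
        ak = a / k**alpha            # decaying step size
        ck = c / k**gamma            # decaying perturbation size
        delta = rng.choice([-1.0, 1.0], size=theta.shape)
        # simultaneous-perturbation gradient estimate
        g_hat = (f(theta + ck * delta) - f(theta - ck * delta)) / (2 * ck) * (1.0 / delta)
        theta -= ak * g_hat
    return theta
```

In the registration setting, `f` would be the negated mutual information between the atlas and the subject as a function of the affine parameters.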
A first advantage of our method is that the computational cost increases linearly with the number of subjects in the database, which makes the method particularly suited to large populations. Another advantage is that, when computing the common coordinate system, the estimation algorithm assigns each subject a weight based on the typicality of its segmentation. This makes the common coordinate system robust to outliers in the population.
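The typicality weighting can be illustrated with a toy update in which each subject's weight is the mean atlas probability of its own labels, so that atypical subjects contribute less to the next atlas estimate (an illustrative rule of this flavor, not necessarily the paper's exact EM estimator; all names are ours):

```python
import numpy as np

def weighted_atlas_update(segs, atlas):
    """One illustrative atlas update with subject typicality weights.

    segs:  (n_subjects, n_voxels) integer label maps in the common frame.
    atlas: (n_labels, n_voxels) current per-voxel label probabilities.
    """
    segs = np.asarray(segs)
    vox = np.arange(segs.shape[1])
    # typicality: mean probability the current atlas assigns to the
    # subject's own labels (outliers score low)
    weights = np.array([atlas[s, vox].mean() for s in segs])
    weights = weights / weights.sum()
    n_labels = atlas.shape[0]
    # rebuild the atlas from typicality-weighted votes
    new_atlas = np.stack([((segs == k) * weights[:, None]).sum(axis=0)
                          for k in range(n_labels)])
    return new_atlas, weights
```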
Several experiments are presented in this paper to validate our atlas construction method on a population of 80 brain images segmented into four labels (background, white matter, gray matter, and ventricles). First, the 80 subjects were aligned using the affine and dense non-rigid deformation models. The results are assessed visually by examining how the population converges toward a central tendency as the deformation model is allowed more degrees of freedom (from affine to a dense non-rigid field). Second, the stability of the atlas construction procedure for various population sizes was investigated by starting from a subset of the total population and incrementally augmenting it until the full population of 80 subjects was reached. Third, the consistency of our group-wise reference (the hidden variable of the EM algorithm) was compared to the choice of an arbitrary subject for a subset of 10 subjects; according to Williams' index, our reference choice performed favorably. Finally, the performance of our algorithm was quantified on a synthetic population of 10 subjects (generated using random B-spline transformations) using a global overlap measure for each label, and we measured the robustness of this measure to the introduction of noisy subjects into the population.
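A per-label global overlap of the kind used in the synthetic evaluation can be computed, for instance, as a Dice coefficient between two label maps (the Dice coefficient is our assumption here; the abstract does not name the specific overlap measure):

```python
import numpy as np

def dice_per_label(seg_a, seg_b, n_labels):
    """Dice overlap coefficient for each label between two segmentations."""
    seg_a, seg_b = np.asarray(seg_a), np.asarray(seg_b)
    scores = []
    for k in range(n_labels):
        a, b = seg_a == k, seg_b == k
        denom = a.sum() + b.sum()
        # convention: two empty regions overlap perfectly
        scores.append(2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0)
    return scores
```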

Type
Research Article
Copyright
© EDP Sciences, 2008
