Machine Learning for Tomographic Imaging. Professor Ge Wang


the $i$th x-ray path and $a_{ij}$ denotes the contribution of the $j$th pixel to the $i$th line integral $b_i$. Then, we have a discretized version of Beer's law

$$I_i = I_0\exp(-b_i) = I_0\exp\!\left(-[\mathbf{A}\mathbf{x}]_i\right), \tag{2.15}$$

where $\mathbf{A} = \{a_{ij}\mid i = 1,2,\ldots,I;\ j = 1,2,\ldots,J\}$ and $[\mathbf{A}\mathbf{x}]_i$ denotes the $i$th element of $\mathbf{A}\mathbf{x}$.

With a CT scanner, the raw signal contains inherent quantum noise. Approximately, the measured data follow the Poisson distribution

$$I_i \sim \mathrm{Poisson}(\bar{I}_i), \tag{2.16}$$

where $I_i$ denotes a specific measurement compromised by Poisson noise and $\bar{I}_i$ is its mean, given by equation (2.15).
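To make the forward model concrete, here is a minimal numerical sketch of equations (2.15) and (2.16); the system matrix, image, and incident photon count below are illustrative placeholders, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

J = 64 * 64                        # number of image pixels
I = 180 * 64                       # number of x-ray paths (views x detector bins)
A = rng.random((I, J)) * 1e-3      # toy system matrix of contributions a_ij
x = rng.random(J) * 0.02           # toy attenuation image
I0 = 1.0e4                         # incident photon count per ray

b_ideal = A @ x                    # ideal line integrals [Ax]_i
I_bar = I0 * np.exp(-b_ideal)      # mean detected counts, eq. (2.15)
I_meas = rng.poisson(I_bar)        # Poisson-noisy measurements, eq. (2.16)
```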

Since the measured data along different paths are statistically independent of each other, under the Poisson model the joint probability distribution of the measured data can be written as follows,

$$P(\mathbf{I}\mid\mathbf{x}) = \prod_{i=1}^{I}\frac{\bar{I}_i^{\,I_i}\exp(-\bar{I}_i)}{I_i!}, \tag{2.17}$$

and the corresponding log-likelihood function is expressed as follows:

$$L(\mathbf{I}\mid\mathbf{x}) = \log P(\mathbf{I}\mid\mathbf{x}) = \sum_{i=1}^{I}\left(I_i\log\bar{I}_i - \bar{I}_i - \log I_i!\right) \tag{2.18}$$

      or

$$L(\mathbf{I}\mid\mathbf{x}) = \sum_{i=1}^{I}\left(I_i\log\!\left(I_0 e^{-[\mathbf{A}\mathbf{x}]_i}\right) - I_0 e^{-[\mathbf{A}\mathbf{x}]_i} - \log I_i!\right). \tag{2.19}$$

      Ignoring the constants, we rewrite the above formula as

$$L(\mathbf{I}\mid\mathbf{x}) = -\sum_{i=1}^{I}\left(I_i[\mathbf{A}\mathbf{x}]_i + I_0 e^{-[\mathbf{A}\mathbf{x}]_i}\right). \tag{2.20}$$
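As a quick illustration, the simplified log-likelihood of equation (2.20) can be evaluated directly from the projection data; the function below reuses the toy variables from the sketch above and is only an illustrative sketch, not code from the book.

```python
import numpy as np

def poisson_log_likelihood(x, A, I_meas, I0):
    """Simplified Poisson log-likelihood of eq. (2.20), constant terms dropped."""
    proj = A @ x                                      # line integrals [Ax]_i
    return -np.sum(I_meas * proj + I0 * np.exp(-proj))

# Example: evaluate the log-likelihood at a candidate image x.
# value = poisson_log_likelihood(x, A, I_meas, I0)
```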

Statistically, it makes perfect sense to maximize the posterior probability:

$$P(\mathbf{x}\mid\mathbf{I}) = \frac{P(\mathbf{I}\mid\mathbf{x})\,P(\mathbf{x})}{P(\mathbf{I})}. \tag{2.21}$$

Considering the monotonically increasing property of the natural logarithm, and noting that $P(\mathbf{I})$ does not depend on $\mathbf{x}$, the image reconstruction process is equivalent to maximizing the following objective function:

$$L(\mathbf{I}\mid\mathbf{x}) + \log P(\mathbf{x}). \tag{2.22}$$

In equation (2.22), $\log P(\mathbf{x})$ denotes the prior information and can be conveniently incorporated by writing $-\log P(\mathbf{x})$ as $\beta\psi(\mathbf{x})$. Then, the image reconstruction problem can be expressed as the following minimization problem:

$$\Phi(\mathbf{x}) = -L(\mathbf{I}\mid\mathbf{x}) - \log P(\mathbf{x}) = \sum_{i=1}^{I}\left(I_i[\mathbf{A}\mathbf{x}]_i + I_0 e^{-[\mathbf{A}\mathbf{x}]_i}\right) + \beta\psi(\mathbf{x}). \tag{2.23}$$

It is noted that $\psi(\mathbf{x})$ is the regularizer and $\beta$ is the regularization parameter. Because this objective function is not easy to optimize numerically, we replace the first term by its second-order Taylor expansion around the measurement-based estimate of the line integral, $b_i = \ln(I_0/I_i)$, and obtain a simplified objective function

$$\Phi(\mathbf{x}) = \sum_{i=1}^{I}\frac{w_i}{2}\left([\mathbf{A}\mathbf{x}]_i - b_i\right)^2 + \beta\psi(\mathbf{x}), \tag{2.24}$$

where $w_i = I_i$ is known as the statistical weight; it arises as the curvature of the $i$th term of equation (2.23) at $b_i$, namely $I_0 e^{-b_i} = I_i$. For the derivation, see Elbakri and Fessler (2002).

This objective function has two terms, for data fidelity and image regularization, respectively. In the fidelity term, the statistical weight modulates the discrepancy between the measured and estimated line integrals along each x-ray path. Heuristically, fewer photons survive along a more attenuating path, so the line integral estimated along that path is more uncertain and receives a smaller weight; in contrast, a less attenuating path is given a larger weight. With the regularization term, prior knowledge can be enforced to guide the image reconstruction process so that the reconstructed image looks more like what we expect based on a statistical model of images.
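The following sketch evaluates the weighted least-squares objective of equation (2.24) with the statistical weights $w_i = I_i$; the quadratic roughness penalty only stands in for the unspecified regularizer $\psi(\mathbf{x})$ and, like the function names, is an illustrative assumption.

```python
import numpy as np

def quadratic_roughness(x, shape=(64, 64)):
    """Illustrative regularizer psi(x): squared differences between horizontal neighbors."""
    img = x.reshape(shape)
    return np.sum(np.diff(img, axis=1) ** 2)

def pwls_objective(x, A, I_meas, I0, beta, psi=quadratic_roughness):
    """Weighted least-squares objective of eq. (2.24) with statistical weights w_i = I_i."""
    b = np.log(I0 / np.maximum(I_meas, 1))    # line-integral estimates b_i = ln(I0 / I_i), guarding zero counts
    w = I_meas                                # statistical weights: fewer counts, smaller weight
    residual = A @ x - b
    return 0.5 * np.sum(w * residual ** 2) + beta * psi(x)
```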

As we discussed in the first chapter, a reconstructed CT image suffers from degraded image quality in the case of low-dose scanning. How to maintain or improve diagnostic performance is the key issue for low-dose CT. Inspired by compressive sensing theory, the sparsity constraint in terms of dictionary learning was developed as an effective way to obtain a sparse representation. Recently, a dictionary learning-based approach for low-dose x-ray CT was proposed by Xu et al (2012), in which a redundant dictionary is incorporated into the statistical reconstruction framework. The dictionary can be either predetermined before image reconstruction or adaptively defined during the image reconstruction process. We describe both modes of dictionary learning-based low-dose CT in the following.

      2.3.2.1 Methodology

Recall the objective function for statistical reconstruction, equation (2.24), in which the regularization term $\psi(\mathbf{x})$ represents prior information on the reconstructed image. Utilizing the sparsity constraint in terms of a learned dictionary as the regularizer, the objective function can be rewritten as

$$\sum_{i=1}^{I}\frac{w_i}{2}\left([\mathbf{A}\mathbf{x}]_i - b_i\right)^2 + \lambda\sum_{s}\left(\left\Vert\mathbf{E}_s\mathbf{x} - \mathbf{D}\boldsymbol{\alpha}_s\right\Vert_2^2 + \nu_s\left\Vert\boldsymbol{\alpha}_s\right\Vert_0\right), \tag{2.25}$$

where $\mathbf{E}_s = \left(e_{nj}^{s}\right)\in\mathbb{R}^{N\times J}$ represents the operator that extracts the $s$th image patch from an image, and $b_i = \ln(I_0/I_i)$ denotes a line integral. It is worth mentioning that Xu et al proposed two strategies for dictionary learning: a global dictionary learned before the statistical iterative reconstruction (GDSIR) and an adaptive dictionary learned during the statistical iterative reconstruction (ADSIR). The former uses a predetermined dictionary to sparsely represent an image, while the latter learns a dictionary from intermediate images obtained during the iterative reconstruction process and uses that dictionary only for sparsely coding each intermediate image. In this subsection, let us look at the two update processes for GDSIR and ADSIR, respectively.
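The patch-extraction operator $\mathbf{E}_s$ can be pictured as follows; the patch size and stride are illustrative choices, not parameters taken from Xu et al (2012).

```python
import numpy as np

def extract_patches(x, shape=(64, 64), patch=8, stride=4):
    """Stack the vectorized overlapping patches E_s x, one row per patch index s."""
    img = x.reshape(shape)
    H, W = shape
    rows = []
    for r in range(0, H - patch + 1, stride):
        for c in range(0, W - patch + 1, stride):
            rows.append(img[r:r + patch, c:c + patch].ravel())
    return np.asarray(rows)

# Each row can then be compared with its sparse approximation D @ alpha_s,
# which is exactly the penalty sum_s ||E_s x - D alpha_s||_2^2 in eq. (2.25).
```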

For GDSIR, a redundant dictionary was trained on a chest region of a baseline image, as shown in figure 2.5. Then, the image reconstruction process is equivalent to solving the following optimization problem, which involves the two variables $\mathbf{x}$ and $\boldsymbol{\alpha}$:

$$\min_{\mathbf{x},\,\boldsymbol{\alpha}}\ \sum_{i=1}^{I}\frac{w_i}{2}\left([\mathbf{A}\mathbf{x}]_i - b_i\right)^2 + \lambda\sum_{s}\left(\left\Vert\mathbf{E}_s\mathbf{x} - \mathbf{D}\boldsymbol{\alpha}_s\right\Vert_2^2 + \nu_s\left\Vert\boldsymbol{\alpha}_s\right\Vert_0\right), \tag{2.26}$$

where the dictionary $\mathbf{D}$ is predetermined and kept fixed.

The minimization is performed with respect to $\mathbf{x}$ and $\boldsymbol{\alpha}$ alternately. First, the sparse code $\tilde{\boldsymbol{\alpha}}$ is fixed and the current image is updated. At this point, the objective function becomes

$$\min_{\mathbf{x}}\ \sum_{i=1}^{I}\frac{w_i}{2}\left([\mathbf{A}\mathbf{x}]_i - b_i\right)^2 + \lambda\sum_{s}\left\Vert\mathbf{E}_s\mathbf{x} - \mathbf{D}\tilde{\boldsymbol{\alpha}}_s\right\Vert_2^2. \tag{2.27}$$


Figure 2.5. Global dictionary learned using the online dictionary learning method (Mairal et al 2009). Reprinted with permission from Xu et al (2012). Copyright 2012 IEEE.

Using the separable paraboloidal surrogate method (Elbakri and Fessler 2002), this objective function is minimized iteratively through pixel-wise updates of the intermediate images $\mathbf{x}^{t}$, where $t = 1,2,\ldots,T$ indexes the iterations.
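The exact update formula is not reproduced in this excerpt, so the sketch below is only one plausible separable-surrogate step for equation (2.27), with a diagonal curvature for each of the two quadratic terms; the helper names, patch settings, and array layouts are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def sps_image_update(x, A, w, b, D, alpha_tilde, lam,
                     shape=(64, 64), patch=8, stride=4):
    """One separable-surrogate style image update for eq. (2.27).

    alpha_tilde is a (K x S) array of fixed sparse codes, one column per patch.
    """
    H, W = shape
    # Gradient and separable curvature of the weighted data-fidelity term.
    residual = A @ x - b
    grad_fid = A.T @ (w * residual)               # sum_i a_ij w_i ([Ax]_i - b_i)
    curv_fid = A.T @ (w * A.sum(axis=1))          # sum_i a_ij w_i sum_k a_ik

    # Gradient and (diagonal) curvature of lam * sum_s ||E_s x - D alpha_s||^2.
    grad_reg = np.zeros(shape)
    coverage = np.zeros(shape)                    # number of patches covering each pixel
    img = x.reshape(shape)
    s = 0
    for r in range(0, H - patch + 1, stride):
        for c in range(0, W - patch + 1, stride):
            diff = img[r:r + patch, c:c + patch].ravel() - D @ alpha_tilde[:, s]
            grad_reg[r:r + patch, c:c + patch] += diff.reshape(patch, patch)
            coverage[r:r + patch, c:c + patch] += 1.0
            s += 1

    numerator = grad_fid + 2.0 * lam * grad_reg.ravel()
    denominator = curv_fid + 2.0 * lam * coverage.ravel() + 1e-12
    return x - numerator / denominator            # pixel-wise damped Newton-type step
```

In practice, a nonnegativity constraint on the image and several such passes per outer iteration would typically be applied.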

After obtaining an intermediate image $\mathbf{x}^{t}$, it is re-coded for a sparse representation. The objective function is changed to

$$\min_{\boldsymbol{\alpha}}\ \sum_{s}\left(\left\Vert\mathbf{E}_s\mathbf{x}^{t} - \mathbf{D}\boldsymbol{\alpha}_s\right\Vert_2^2 + \nu_s\left\Vert\boldsymbol{\alpha}_s\right\Vert_0\right). \tag{2.29}$$

      This equation represents a sparse coding problem, which can be solved using the OMP method described in subsection 2.2.1.
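For concreteness, a generic OMP routine for this sparse coding step might look as follows; this is a textbook version of the greedy algorithm, not the specific implementation referenced in subsection 2.2.1, and the sparsity level is a user-chosen parameter.

```python
import numpy as np

def omp(D, y, sparsity, eps=1e-6):
    """Greedy OMP: return alpha with at most `sparsity` nonzeros such that D @ alpha ~ y."""
    K = D.shape[1]
    alpha = np.zeros(K)
    support, coeffs = [], np.zeros(0)
    residual = y.astype(float)
    for _ in range(sparsity):
        if np.linalg.norm(residual) <= eps:
            break
        # Select the atom most correlated with the current residual.
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k not in support:
            support.append(k)
        # Refit the coefficients on the active set by least squares.
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    if support:
        alpha[support] = coeffs
    return alpha

# Each patch of the intermediate image x^t is coded independently, e.g.
# alpha_s = omp(D, patch_s, sparsity=5).
```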

      For ADSIR, the image reconstruction process is equivalent to solving the following optimization problem:

$$\min_{\mathbf{x},\,\mathbf{D},\,\boldsymbol{\alpha}}\ \sum_{i=1}^{I}\frac{w_i}{2}\left([\mathbf{A}\mathbf{x}]_i - b_i\right)^2 + \lambda\sum_{s}\left(\left\Vert\mathbf{E}_s\mathbf{x} - \mathbf{D}\boldsymbol{\alpha}_s\right\Vert_2^2 + \nu_s\left\Vert\boldsymbol{\alpha}_s\right\Vert_0\right). \tag{2.30}$$

      In ADSIR, the dictionary D