Calvin B. Shaw and Phaneendra K. Yalavarthy, "Performance evaluation of typical approximation algorithms for nonconvex ℓ_{p}-minimization in diffuse optical tomography," J. Opt. Soc. Am. A 31, 852-862 (2014)

The sparse estimation methods that utilize the ${\ell}_{p}$-norm, with $p$ between 0 and 1, have shown better utility in providing optimal solutions to the inverse problem in diffuse optical tomography. These ${\ell}_{p}$-norm-based regularizations make the optimization function nonconvex, so algorithms that implement ${\ell}_{p}$-norm minimization rely on approximations to the original ${\ell}_{p}$-norm function. In this work, three such typical methods for implementing the ${\ell}_{p}$-norm were considered, namely, iteratively reweighted ${\ell}_{1}$-minimization (IRL1), iteratively reweighted least squares (IRLS), and the iterative thresholding method (ITM). These methods were deployed for diffuse optical tomographic image reconstruction and systematically compared with the help of three numerical and gelatin phantom cases. The results indicate that these three implementations of ${\ell}_{p}$-minimization yield similar results, with IRL1 faring marginally better, in the cases considered here, in terms of shape recovery and quantitative accuracy of the reconstructed diffuse optical tomographic images.




Algorithm 1.

Iteratively Reweighted ${\ell}_{1}$ Minimization Algorithm Using Weighted SALSA

INPUT: $\mathbf{J}$, $\delta $, $p$, $\lambda $, $\alpha $, $Tol$, ${N}_{it}$, and a sequence of positive numbers ${\epsilon}_{k}$ such that ${\lim}_{k\to \infty}{\epsilon}_{k}=0$.

OUTPUT: $\mathrm{\Delta}{\mu}_{a}$

Initialize: $d=0$ and $k=1$

1. $\mathrm{\Delta}{\mu}_{a}={\mathbf{J}}^{T}\delta $, ${r}_{1}=(\mathbf{J}\mathrm{\Delta}{\mu}_{a}-\delta )$ and $\mathbf{F}={\mathbf{J}}^{\mathbf{T}}\mathbf{J}+\alpha \mathbf{I}$

6. STOP if $k={N}_{it}$ or ${\Vert {r}_{(k+1)}-{r}_{(k)}\Vert}^{2}\le Tol$; otherwise set $k=k+1$ and go to Step 2.
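Steps 2–5 of Algorithm 1 are not reproduced above, so the weighted-SALSA inner solver cannot be transcribed faithfully. As a rough, hypothetical sketch of the overall IRL1 idea only, the following NumPy code substitutes plain ISTA (soft-thresholding gradient steps) for SALSA in each weighted-${\ell}_{1}$ subproblem; the function name `irl1_lp`, the reweighting form, and all default parameter values are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise soft-thresholding, the proximal map of a weighted l1 term."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def irl1_lp(J, delta, p=0.8, lam=1e-3, n_it=20, inner_it=100, eps_seq=None):
    """Iteratively reweighted l1 sketch for the smoothed l_p penalty.

    Each outer iteration majorizes ||x||_p^p by a weighted l1 norm with
    weights lam*p / (eps_k + |x_i|)^(1-p), then (here) solves the
    weighted-l1 subproblem with ISTA instead of the paper's SALSA.
    """
    if eps_seq is None:
        eps_seq = [10.0 ** (-(k + 1)) for k in range(n_it)]  # eps_k -> 0
    x = J.T @ delta                       # initial guess, as in the listing
    L = np.linalg.norm(J, 2) ** 2         # Lipschitz constant of the data-fit gradient
    for k in range(n_it):
        w = lam * p / (eps_seq[k] + np.abs(x)) ** (1.0 - p)
        for _ in range(inner_it):         # ISTA on the weighted-l1 subproblem
            x = soft_threshold(x - J.T @ (J @ x - delta) / L, w / L)
    return x
```

On a small synthetic Jacobian with a single-target perturbation, the recovered update comes out sparse, which is the behavior the nonconvex penalty is meant to encourage.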

Algorithm 2.

Iteratively Reweighted Least Squares Minimization Algorithm

INPUT: $\mathbf{J}$, $\delta $, $p$, $\lambda $, $Tol$, ${N}_{it}$, and a sequence of positive numbers ${\epsilon}_{k}$ such that ${\lim}_{k\to \infty}{\epsilon}_{k}=0$.

OUTPUT: $\mathrm{\Delta}{\mu}_{a}$

Initialize: $k=1$, $\mathrm{\Delta}{\mu}_{a}^{(1)}={\mathbf{J}}^{T}\delta $ and ${r}_{1}=(\mathbf{J}\mathrm{\Delta}{\mu}_{a}^{(1)}-\delta )$

1. Solve the following weighted normal equation: $$\left[{\mathbf{J}}^{T}\mathbf{J}+\mathrm{diag}\left(\frac{\lambda p}{{\left({\epsilon}_{k}+{\left(\mathrm{\Delta}{\mu}_{a}^{(k)}(i)\right)}^{2}\right)}^{1-\frac{p}{2}}}\right)\right]\mathrm{\Delta}{\mu}_{a}={\mathbf{J}}^{T}\delta $$

2. Update $\mathrm{\Delta}{\mu}_{a}^{(k+1)}=\mathrm{\Delta}{\mu}_{a}$ and ${r}_{(k+1)}=\mathbf{J}\mathrm{\Delta}{\mu}_{a}-\delta $

3. STOP if $k={N}_{it}$ or ${\Vert {r}_{(k+1)}-{r}_{(k)}\Vert}^{2}\le Tol$; otherwise set $k=k+1$ and go to Step 1.
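Algorithm 2 maps almost line for line onto NumPy. The sketch below follows its structure but is a hypothetical illustration only: the function name `irls_lp` and the default values of $p$, $\lambda $, $Tol$, ${N}_{it}$, and the ${\epsilon}_{k}$ sequence are assumptions chosen for demonstration, not the paper's settings.

```python
import numpy as np

def irls_lp(J, delta, p=0.8, lam=1e-3, tol=1e-8, n_it=50, eps_seq=None):
    """IRLS for the l_p-regularized update, following Algorithm 2's structure.

    Repeatedly solves the weighted normal equations of Step 1, with diagonal
    weights lam*p / (eps_k + x_i^2)^(1 - p/2), until the squared change in
    the residual falls below tol or n_it iterations are reached.
    """
    if eps_seq is None:
        eps_seq = [10.0 ** (-(k + 1)) for k in range(n_it)]  # eps_k -> 0
    x = J.T @ delta                       # Delta mu_a^(1) = J^T delta
    r_prev = J @ x - delta                # r_1
    JtJ, Jtd = J.T @ J, J.T @ delta
    for k in range(n_it):
        w = lam * p / (eps_seq[k] + x ** 2) ** (1.0 - p / 2.0)
        x = np.linalg.solve(JtJ + np.diag(w), Jtd)   # Step 1
        r = J @ x - delta                            # Step 2: residual update
        if np.sum((r - r_prev) ** 2) <= tol:         # Step 3: stopping rule
            break
        r_prev = r
    return x
```

As $x_i^2$ shrinks relative to ${\epsilon}_{k}$, the corresponding diagonal weight grows, which is what drives small components toward zero and produces the sparsity-promoting behavior of the ${\ell}_{p}$ penalty.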

Table 2.

Mean Reconstructed ${\mu}_{a}$ in the Region of Interest [Target(s)] for the Results Presented in This Work$^{a}$

The standard deviations observed among the reconstructed ${\mu}_{a}$ distributions as in Fig. 1 using 10 realizations of the noisy data (with fixed noise levels of 1% and 5%) are given in the first and second rows.