Approximate Marginalization of Absorption and Scattering in Fluorescence Diffuse Optical Tomography



(Communicated by Samuli Siltanen)
Abstract. In fluorescence diffuse optical tomography (fDOT), the reconstruction of the fluorophore concentration inside the target body is usually carried out using a normalized Born approximation model where the measured fluorescent emission data is scaled by measured excitation data. One of the benefits of the model is that it can, to some extent, tolerate inaccuracy in the absorption and scattering distributions that are used in the construction of the forward model. In this paper, we employ the recently proposed Bayesian approximation error approach to fDOT for compensating for the modeling errors caused by the inaccurately known optical properties of the target in combination with the normalized Born approximation model. The approach is evaluated using a simulated test case with different amounts of error in the optical properties. The results show that the Bayesian approximation error approach improves the tolerance of fDOT imaging against modeling errors caused by inaccurately known absorption and scattering of the target.
1. Introduction. Fluorescence diffuse optical tomography (fDOT) is an emerging imaging technique aiming at recovering the distribution of a fluorophore marker inside a diffusive target medium from measurements of fluorescent emission at the surface of the body [6,35]. Typically, fluorescence agents bound to molecules or proteins are introduced into the bloodstream. The molecules then act as ligands as they attach themselves to the targeted receptor sites. The surface of the body is illuminated with light at the excitation wavelength of the fluorescence agent. The measurement system collects fluence from the body surface at the emission wavelength. The inverse problem associated with fDOT is the estimation of the spatially distributed fluorophore marker concentration in the body.
Computationally, the inverse problem in fDOT amounts to estimating the spatially distributed fluorophore concentration from the model

(1) y = A(µ_a, µ_s)h + e,

where h ∈ R^n is the vector of unknowns, representing a pixel, voxel or other parametrization of the fluorophore concentration, y ∈ R^m is the measurement vector, e ∈ R^m models the measurement noise and A(µ_a, µ_s) is the matrix implementing the forward model (i.e., the light propagation model) corresponding to nominal absorption and scattering distributions µ_a and µ_s. The estimation of h from the model (1) is an ill-posed inverse problem that is sensitive to measurement and modeling errors. The fDOT inverse problem is most often solved using the so-called normalized Born approximation model [38], where the measurement vector is the measured fluorescent emission data vector y^f_obs scaled (elementwise) by the measured excitation data y^e_obs as

(2) y = y^f_obs / y^e_obs,

and the forward matrix A is

(3) A = diag(y^e_calc)^{-1} Ã,

where Ã is the forward matrix corresponding to the raw fluorescence measurement and y^e_calc is the computed excitation data corresponding to the absorption and scattering distributions µ_a and µ_s [38,41]. The convenience of the Born normalization comes from the fact that it does not require a reference excitation measurement from a homogeneous reference medium. The normalization also effectively calibrates the problem with respect to source strength and the individual gains and coupling coefficients of the individual source-detector pairs [38,43]. From a practical point of view, a further significant feature of the Born normalized model is that it can tolerate inaccurately known target absorption and scattering distributions (µ_a, µ_s) to some extent.
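As a concrete sketch of the normalization (2)-(3): each row of the raw fluorescence forward matrix Ã is divided by the computed excitation datum of the corresponding source-detector pair, and the data vector is the elementwise Born ratio. All sizes and values below are hypothetical toy inputs, not quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
A_raw = rng.random((4, 6))                   # raw fluorescence forward matrix Ã (toy)
y_e_calc = np.array([2.0, 4.0, 5.0, 10.0])   # computed excitation data (toy)

# (3): each row of Ã is divided by the computed excitation datum
# of the corresponding source-detector pair.
A = A_raw / y_e_calc[:, None]

# (2): the data vector is the elementwise Born ratio of measured
# emission to measured excitation data.
y_f_obs = np.array([0.2, 0.8, 1.0, 3.0])
y_e_obs = np.array([2.0, 4.0, 5.0, 10.0])
y = y_f_obs / y_e_obs
```

Note that the same per-pair normalization is applied to the model rows and to the data, which is what cancels the unknown source strengths and coupling coefficients.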
The absorption and scattering distributions (µ_a, µ_s) are often interpreted as known (nuisance) parameters in the fDOT problem. However, in practical experiments, the actual values of these parameters are usually not known accurately, and therefore one is bound to use the approximate measurement model

(4) y = A(µ_a,*, µ_s,*)h + e,

where µ_a,* and µ_s,* are our estimates for the absorption and scattering of the body. Obviously, the use of incorrect realizations (µ_a,*, µ_s,*) in the measurement model induces an unknown modeling error, and since the inverse problem is ill-posed, this error may cause large artifacts in the reconstructed fluorophore image.
In this paper, we propose the reduction of the reconstruction errors caused by inaccurately known absorption and scattering of the body by the Bayesian approximation error approach [21,22]. We consider the approach using the Born normalized model (1)(2)(3), so that the starting point would be the model that has the best available tolerance for the inaccurately known absorption and scattering.
The key idea in the Bayesian approximation error model is to represent not just the measurement error, but also the computational model inaccuracy as a random variable. Hence, instead of the approximate measurement model (4), we write the accurate measurement model

(5) y = A(µ_a,*, µ_s,*)h + [A(µ_a, µ_s) − A(µ_a,*, µ_s,*)]h + e

(6)   = A(µ_a,*, µ_s,*)h + ν,

where the term ε = [A(µ_a, µ_s) − A(µ_a,*, µ_s,*)]h represents the approximation error and ν = ε + e the total noise. Obviously, the realization of ε is unknown since it depends on the unknown h and the unknown nuisance parameters (µ_a, µ_s). However, in the Bayesian inversion paradigm, we can compute an estimate for the statistics of the approximation error ε over the prior probability distributions of the unknowns and the nuisance parameters µ_a and µ_s. The approximation error statistics are then used in the inverse problem to compensate for the inaccurately known µ_a and µ_s.
The Bayesian approximation error approach was originally applied for discretization error in several different applications in Ref. [21]. For this reason, the term "approximation error" is commonly used also where "modeling error" might be a more appropriate term. The approach was verified with real EIT data in Ref. [33], where the approach was employed for the compensation of discretization errors and the errors caused by inaccurately known height of the air-liquid surface in an industrial mixing tank. The application of the Bayesian approximation error approach for the discretization errors and the truncation of the computational domain was studied in Ref. [27], and for the linearization error in Ref. [40]. In Ref. [34] the approach was evaluated for the compensation of errors caused by coarse discretization, domain truncation and unknown contact impedances with real EIT data. In addition to EIT, the Bayesian approximation error approach has also been applied to other inverse problems and other types of (modeling) errors: Model reduction, domain truncation and unknown anisotropy structures in optical diffusion tomography were treated in Refs. [2], [24], [14] and [15]. Missing boundary data in the case of image processing was considered in Ref. [3]. In Ref. [50], again related to optical tomography, an approximative physical model (diffusion model instead of the radiative transfer model) was used for the forward problem. In Ref. [25], an unknown nuisance distributed parameter (scattering coefficient) was treated with the Bayesian approximation error approach. The compensation of errors caused by unknown optode coupling coefficients and locations was considered in Ref. [31]. The compensation of errors caused by unknown domain shape and discretization error was considered in Ref. [32]. The extension and application of the modeling error approach to time-dependent inverse problems was considered in Refs. [16], [17] and [45].
In this work, we evaluate the Bayesian approximation error approach with simulated two-dimensional (2D) and three-dimensional (3D) examples. In the 2D example, we use different levels of severity of error in the nominal absorption and scattering distributions and show that the approach improves the tolerance of the fDOT problem against inaccurately known absorption and scattering of the body over the conventional reconstruction approach. In the 3D example, we test the approach using simulated data in the Digimouse atlas geometry [10,12].
The remainder of the paper is organized as follows. In Section 2, the Bayesian approximation error model for the compensation of modeling errors due to unknown absorption and scattering in fDOT is presented. In Section 3, we review the light transport model which we use for the forward model. The details of data simulation, meshing, constructing the prior models and estimation of the approximation error statistics are described in Section 4. The simulation results are presented in Section 5. Finally, the conclusions are given in Section 6.

2.1. Statistical inversion in general.
In the Bayesian approach to inverse problems, the principle is that all unknowns and measured quantities are considered as random variables and the uncertainty of their values is encoded into their probability distribution models [21,4]. The complete model of the inverse problem is the posterior density, that is, the information and uncertainties in the unknown or interesting variables given the measurements, given by Bayes' theorem

(7) π(h, µ_a, µ_s, e|y) = π(y|h, µ_a, µ_s, e) π(h, µ_a, µ_s, e) / π(y),

where π(y|h, µ_a, µ_s, e) is the likelihood density modeling the probability of different measurement realizations when the realizations of h, µ_a, µ_s and e are given. The density π(h, µ_a, µ_s, e) is the prior model; it models our information on the unknown parameters before the actual measurements. The posterior (7) is practically always marginalized with respect to the unknown but uninteresting measurement related errors e as

(8) π(h, µ_a, µ_s|y) = ∫ π(h, µ_a, µ_s, e|y) de;

for details in the case of the additive error model y = A(h) + e, see Refs. [25] and [20], and for other types of errors see Ref. [21]. The posterior density π(h, µ_a, µ_s|y) is a probability density in a very high-dimensional space. Thus, in order to get practical estimates for the unknowns and to visualize the solution, one needs to compute point estimate(s) from the posterior density, the most common choice in high-dimensional cases being the maximum a posteriori (MAP) estimate. In principle, one could attempt to compute the MAP estimate for all the unknown model parameters,

(9) (h, µ_a, µ_s)_MAP = arg max π(h, µ_a, µ_s|y);

see Refs. [49], [48] and [29] for simultaneous reconstruction of fluorescent and optical parameters. Also, in Ref. [7] a method is presented to determine a modified optical parameter (related to µ_a, µ_s) from measured excitation data and include it as a priori anatomical information in fDOT imaging.
Alternatively, one could treat the uncertainty in the values of the nuisance parameters (µ_a, µ_s) by marginalizing the posterior density as

(10) π(h|y) = ∫∫ π(h, µ_a, µ_s|y) dµ_a dµ_s

and then compute an estimate for the primary unknowns from the posterior π(h|y). However, the evaluation of (10) would require Markov chain Monte Carlo integrations that would be computationally infeasible for practical purposes.
The key idea in the Bayesian approximation error approach is to find an approximation π̃(h|y) for the posterior (10) such that the marginalization over the uncertainty in the values of (µ_a, µ_s) is carried out approximately and in a computationally feasible way.
Before presenting the Bayesian approximation error approach for treating the uncertainty in the optical parameters (µ a , µ s ), we first review the standard fDOT reconstruction approach where (µ a , µ s ) = (µ a, * , µ s, * ) are treated as known and fixed variables.
Treating (µ_a, µ_s) = (µ_a,*, µ_s,*) as fixed, the posterior model is

(11) π(h|y, µ_a,*, µ_s,*) ∝ π(y|h, µ_a,*, µ_s,*) π(h).

However, in cases where the values (µ_a,*, µ_s,*) are incorrect, the model (11) can be grossly misleading. Given the observation model (4) with fixed realizations (µ_a, µ_s) = (µ_a,*, µ_s,*), modeling the measurement noise as normal, e ∼ N(e_*, Γ_e), where e_* ∈ R^m is the measurement noise mean and Γ_e ∈ R^{m×m} is the measurement noise covariance, and marginalizing (11) over the unknown measurement errors e as π(h|y, µ_a,*, µ_s,*) = ∫ π(h, e|y, µ_a,*, µ_s,*) de, the posterior density becomes [25,20]

(12) π(h|y, µ_a,*, µ_s,*) ∝ exp( −(1/2) ‖L_e(y − A(µ_a,*, µ_s,*)h − e_*)‖² ) π(h),

where L_e is the Cholesky factor of the noise precision matrix, L_e^T L_e = Γ_e^{-1}. In addition, if the random measurement noise has zero mean (e_* = 0), the MAP estimate corresponding to the posterior (12) is obtained as

(13) h_cem = arg min_h { ‖L_e(y − A(µ_a,*, µ_s,*)h)‖² − 2 log π(h) }.

We refer to the solution of (13) as the MAP estimate with the conventional error model (CEM) approach.
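With a linear forward model and a Gaussian prior, the minimization in (13) is a linear least-squares problem that can be solved by stacking the whitened data and prior terms. The sketch below uses a random test matrix, an identity prior factor and a zero prior mean as illustrative assumptions; it is not the paper's FEM model.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 20, 10
A = rng.standard_normal((m, n))        # illustrative forward matrix A(µ_a,*, µ_s,*)
h_true = np.abs(rng.standard_normal(n))
sigma_e = 0.01
y = A @ h_true + sigma_e * rng.standard_normal(m)

# Whitening factors: L_e^T L_e = Γ_e^{-1}, L_h^T L_h = Γ_h^{-1}
L_e = np.eye(m) / sigma_e
L_h = np.eye(n)                        # assumed identity prior factor
h_star = np.zeros(n)                   # assumed zero prior mean

# MAP-CEM: h_cem = argmin ||L_e(y - A h)||^2 + ||L_h(h - h_*)||^2,
# solved as one stacked linear least-squares problem.
K = np.vstack([L_e @ A, L_h])
b = np.concatenate([L_e @ y, L_h @ h_star])
h_cem, *_ = np.linalg.lstsq(K, b, rcond=None)
```

The same stacking trick is reused below for the approximation error model; only the whitening matrix and the data offset change.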
2.3. Bayesian approximation error model. In the Bayesian approximation error approach, we write the measurement model as

(14) y = A(µ_a,*, µ_s,*)h + ε + e

(15)   = A(µ_a,*, µ_s,*)h + ν.

Note that the model (15) is exact, see equation (1). Here the term ε = [A(µ_a, µ_s) − A(µ_a,*, µ_s,*)]h represents the approximation error and ν = ε + e the total error. Obviously, the realization of ε is unknown since it depends on the unknown h and the unknown nuisance parameters (µ_a, µ_s). However, in the Bayesian inversion paradigm we can compute an estimate of the statistics of ε using the prior probability distributions of the unknowns and the nuisance parameters and approximately marginalize over ε.
To include the uncertainty in the random variable ε in our computational posterior model, the core step in the Bayesian approximation error approach is approximate pre-marginalization of the joint distribution of the parameters (y, h, e, ε, µ_a, µ_s) over the nuisance parameters (e, ε, µ_a, µ_s). Following the approach in Refs. [25] and [20] and making a Gaussian approximation for the joint density π(h, ε), we obtain the approximate likelihood model

(16) π̃(y|h) ∝ exp( −(1/2) ‖L_{ν|h}(y − A(µ_a,*, µ_s,*)h − ν_{*|h})‖² ),

where the conditional mean and covariance of the total error ν given h are

(17) ν_{*|h} = e_* + ε_* + Γ_{εh} Γ_h^{-1} (h − h_*),

(18) Γ_{ν|h} = Γ_e + Γ_ε − Γ_{εh} Γ_h^{-1} Γ_{hε},

and L_{ν|h}^T L_{ν|h} = Γ_{ν|h}^{-1}. For the posterior model, we write the approximation π̃(h|y) ∝ π̃(y|h)π(h), leading to the MAP estimation problem

(19) h_aem = arg min_h { ‖L_{ν|h}(y − A(µ_a,*, µ_s,*)h − ν_{*|h})‖² − 2 log π(h) }.

We refer to the estimate (19) as the MAP with the Bayesian approximation error model (MAP-AEM).
In the following, we employ the enhanced error model [21,22], where we approximate the error ε and the unknown h as mutually uncorrelated, implying that the mean and covariance in equations (17)-(18) become

ν_* = e_* + ε_*,   Γ_ν = Γ_e + Γ_ε.

In this work, the mean ε_* and covariance Γ_ε were computed by sampling-based Monte Carlo integration (see Section 4.3).
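The enhanced-error-model statistics can be estimated exactly as described: draw samples of the nuisance parameters and the unknowns from their priors, evaluate ε^(ℓ) = [A(µ^(ℓ)) − A(µ_*)]h^(ℓ), and form the sample mean and covariance. The toy forward-matrix builder, prior draws and dimensions below are stand-ins for the FEM models and smoothness priors of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, N_s = 8, 5, 500

def forward(mu):
    # Hypothetical forward-matrix builder standing in for A(µ_a, µ_s).
    return np.outer(np.exp(-mu), np.ones(n)) + np.eye(m, n)

mu_star = 0.5 * np.ones(m)             # nominal (fixed) optical parameters
A_star = forward(mu_star)

# Monte Carlo sampling: eps^(l) = [A(mu^(l)) - A(mu_*)] h^(l)
samples = np.empty((N_s, m))
for ell in range(N_s):
    mu = mu_star + 0.1 * rng.standard_normal(m)   # draw nuisance parameters
    h = np.maximum(rng.standard_normal(n), 0.0)   # draw non-negative unknowns
    samples[ell] = (forward(mu) - A_star) @ h

eps_star = samples.mean(axis=0)                   # approximation error mean ε_*
Gamma_eps = np.cov(samples, rowvar=False)         # approximation error covariance Γ_ε

# Enhanced error model: whiten with the total-noise covariance Γ_ν = Γ_e + Γ_ε
Gamma_e = 1e-4 * np.eye(m)
L_nu = np.linalg.cholesky(np.linalg.inv(Gamma_e + Gamma_eps)).T
```

In the MAP-AEM problem (19), L_nu replaces the measurement-noise factor L_e and eps_star is subtracted from the data residual.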

2.4. Estimates to be computed. To evaluate the Bayesian approximation error approach, we compute the following estimates.

(MAP-REF): The unknown h_ref using the correct optical coefficients (µ_a, µ_s),

(20) h_ref = arg min_h { ‖L_e(y − A(µ_a, µ_s)h)‖² − 2 log π(h) }.

In other words, in MAP-REF (µ_a, µ_s) are known exactly. Therefore, this estimate serves as a reference of conventional reconstruction when (µ_a, µ_s) are known.
(MAP-CEM): The unknown h_cem using incorrect fixed optical coefficients (µ_a,*, µ_s,*) (Eq. (13)). This estimate serves as the conventional reconstruction when the nominal (µ_a, µ_s) are incorrect.

(MAP-AEM): The unknown h_aem using the same fixed incorrect coefficients (µ_a,*, µ_s,*) (Eq. (19)). This estimate serves as the Bayesian approximation error model reconstruction when the nominal (µ_a, µ_s) are incorrect.

The values (µ_a,*, µ_s,*) in the computations correspond to the expectations of the (proper Gaussian smoothness) prior models π(µ_a) and π(µ_s), respectively. The prior means were homogeneous distributions with µ_a,*(r) ≡ 0.01 mm⁻¹ and µ_s,*(r) ≡ 1 mm⁻¹.

3. Forward model. Let Ω ⊂ R^n, n = 2, 3, denote the object domain. In this paper we consider fDOT in the diffusive regime (i.e., imaging of highly scattering media). In the diffusive regime the most commonly used light transport model for excitation and fluorescence light is the diffusion approximation (DA) to the radiative transport equation (RTE). For further details of the diffusion approximation and its limitations, see [18,1]. In this paper the DC (zero-frequency) domain version of the diffusion approximation is used,

(21) −∇·(κ(r)∇Φ_e(r)) + µ_a(r)Φ_e(r) = 0, r ∈ Ω,

(22) Φ_e(r) + (1/(2ζ)) α κ(r) ∂Φ_e(r)/∂ϑ = q(r), r ∈ ∂Ω,

(23) −∇·(κ(r)∇Φ_f(r)) + µ_a(r)Φ_f(r) = h(r)Φ_e(r), r ∈ Ω,

(24) Φ_f(r) + (1/(2ζ)) α κ(r) ∂Φ_f(r)/∂ϑ = 0, r ∈ ∂Ω,

where Φ_e(r) := Φ_e is the excitation photon density, Φ_f(r) := Φ_f is the fluorescence emission photon density, µ_a(r) := µ_a is the absorption coefficient and κ(r) := κ is the diffusion coefficient. The diffusion coefficient is given by κ(r) = 1/(n(µ_a(r) + µ_s(r))), where µ_s(r) := µ_s is the reduced scattering coefficient. The optical properties of biological tissues do not typically vary significantly between the excitation and emission wavelengths [5,19,48]. Hence, it is a common approximation to omit the spectral dependence of the optical properties and model the (unknown) coefficients (µ_a, µ_s) as the same at the excitation and emission wavelengths [43,48]. The parameter h(r) := h is the fluorophore concentration.
The parameter q(r) is the strength of the light source at location r_s ⊂ ∂Ω. The parameter ζ is a dimension-dependent constant (ζ = 1/π when Ω ⊂ R², ζ = 1/4 when Ω ⊂ R³), α is a parameter governing the internal reflection at the boundary ∂Ω and ϑ is the outward normal to the boundary at point r. The measurable excitation data and fluorescence emission data (Eq. (2)) are integrals of the exitance over the detector area,

(25) y^e_calc = −∫_{r_d} κ(r) (∂Φ_e(r)/∂ϑ) dS,

(26) y^f_calc = −∫_{r_d} κ(r) (∂Φ_f(r)/∂ϑ) dS,

where r_d ⊂ ∂Ω are the detector locations.
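To give a feel for the DC diffusion model, the sketch below solves a 1-D analogue of (21)-(22) with finite differences and Robin boundary conditions. The grid, unit source strength and the boundary constant c are illustrative choices, not the paper's FEM setup; the nominal values µ_a = 0.01 mm⁻¹ and µ_s' = 1 mm⁻¹ are taken from the paper.

```python
import numpy as np

# 1-D finite-difference analogue of the DC diffusion model:
# -d/dx(kappa dPhi/dx) + mu_a Phi = 0 with Robin boundary conditions.
N = 201
x = np.linspace(0.0, 50.0, N)          # 50 mm slab (assumed)
dx = x[1] - x[0]
mu_a, mu_s = 0.01, 1.0                 # nominal values from the paper (mm^-1)
kappa = 1.0 / (2.0 * (mu_a + mu_s))    # kappa = 1/(n(mu_a + mu_s')), n = 2 here

M = np.zeros((N, N))
b = np.zeros(N)
for i in range(1, N - 1):              # interior: -kappa Phi'' + mu_a Phi = 0
    M[i, i - 1] = M[i, i + 1] = -kappa / dx**2
    M[i, i] = 2.0 * kappa / dx**2 + mu_a
c = 2.0                                # illustrative Robin boundary constant
M[0, 0] = 1.0 + c * kappa / dx         # source boundary: Phi + c*kappa*dPhi/dn = q
M[0, 1] = -c * kappa / dx
b[0] = 1.0                             # unit source strength q
M[-1, -1] = 1.0 + c * kappa / dx       # far boundary: homogeneous Robin condition
M[-1, -2] = -c * kappa / dx

Phi = np.linalg.solve(M, b)            # photon density, decays away from the source
```

The solution is positive and decays with distance from the source, as expected for diffuse light in an absorbing medium.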
For the inverse problem, the mapping A(µ_a, µ_s)h (Eq. (1)) is given by [38]

(27) [A(µ_a, µ_s)h]_{(s,d)} = (1/y^e_calc(r_s, r_d)) ∫_Ω Ψ_e(r_d, r) h(r) Φ_e(r_s, r) dr,

where Φ_e(r_s, r) is the excitation photon density due to the source q(r) and Ψ_e(r_d, r) is the adjoint solution (the photon density due to sources placed at the detector locations r_d).
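In a discretized setting, each row of the normalized Born matrix is a quadrature of the product of the excitation and adjoint fields, divided by the computed excitation datum of that source-detector pair. The nodal fields, weights and dimensions below are hypothetical placeholders for FEM quantities.

```python
import numpy as np

rng = np.random.default_rng(3)
n_src, n_det, n_k = 3, 4, 50
w = np.full(n_k, 1.0 / n_k)                                   # quadrature weights (toy)
Phi_e = np.abs(rng.standard_normal((n_src, n_k)))             # excitation fields (toy)
Psi_e = np.abs(rng.standard_normal((n_det, n_k)))             # adjoint fields (toy)
y_e_calc = 0.1 + np.abs(rng.standard_normal((n_src, n_det)))  # computed excitation data

# One row per (source, detector) pair:
# A[(s,d), k] = w_k * Psi_e[d, k] * Phi_e[s, k] / y_e_calc[s, d]
A = np.empty((n_src * n_det, n_k))
for s in range(n_src):
    for d in range(n_det):
        A[s * n_det + d] = w * Psi_e[d] * Phi_e[s] / y_e_calc[s, d]
```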
The numerical approximation of the forward model (27) was computed using the finite element method (FEM).

2D simulations.

In the 2D numerical studies, the domain Ω ⊂ R² was a disk with radius r = 25 mm. The measurement setup consisted of 16 sources and 16 detectors. The source and detector optodes were modeled as 1 mm wide surface patches located at equi-spaced angular intervals on the boundary ∂Ω. With this setup, the vector of fDOT measurements (2) was y ∈ R^256. Five targets with different absorption and scattering properties were used in the simulations (see Fig. 2, columns 1, 2). The fluorophore concentrations of these targets are shown in column 1, Fig. 3. For the simulation of the measurement data, a mesh with 33806 nodes and 67098 triangular elements was used. In the inverse problem, a FEM mesh with 26075 nodes and 51636 elements was used.

3D simulations.
In the 3D simulations, the domain Ω ⊂ R³ was the mouse atlas "Digimouse" [10,12]. The measurement setup consisted of 32 sources and 32 detectors (see Fig. 4, bottom right). The source and detector optodes were modeled as 1 mm wide surface patches placed on the top surface of the boundary ∂Ω. With this setup, the vector of fDOT measurements (2) was y ∈ R^512. The optical properties of the target are shown in column 1, Fig. 4, and the fluorophore concentration of the target in column 2, Fig. 4. For generating the measurement data and for the inverse problem, we used the same mesh, obtained from the Digimouse website [12], with 58244 nodes and 306773 elements.
The simulated measurement data was generated using the FEM approximation of Eqs. (21)-(26). Random measurement noise, drawn from a zero-mean Gaussian distribution, was added separately to the simulated excitation and fluorescence emission data as

(28) y^f_obs = y^f_calc + e_f,   y^e_obs = y^e_calc + e_e.
Here e_f ∼ N(0, diag(σ²_{e_f,1}, ..., σ²_{e_f,m})) ∈ R^m is the noise added to the fluorescence emission data and e_e ∼ N(0, diag(σ²_{e_e,1}, ..., σ²_{e_e,m})) ∈ R^m is the noise added to the excitation data. The standard deviations {σ_{e_e,1}, ..., σ_{e_e,m}} and {σ_{e_f,1}, ..., σ_{e_f,m}} were specified as 1% of the simulated noise-free excitation and fluorescence emission data, y^e_calc and y^f_calc, respectively. In order to estimate the noise statistics for the inverse problem, a Gaussian approximation for the measurement noise e, π(e) = N(0, Γ_e), was constructed. The covariance Γ_e was approximated by a diagonal matrix whose diagonal elements (i.e., the variances of the measurements) were estimated as sample variances from 100 noisy realizations of the Born ratio (Eq. (28)). In a practical setup, a similar estimation of Γ_e can be carried out by repeated measurements from a phantom target.
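The noise-statistics estimation described above can be mimicked directly: simulate repeated noisy excitation and emission data with 1% relative noise, form the Born ratios, and take sample variances channel by channel. The noise-free data vectors below are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n_rep = 16, 100
y_e_calc = 2.0 + np.abs(rng.standard_normal(m))   # synthetic noise-free excitation data
y_f_calc = 0.5 + np.abs(rng.standard_normal(m))   # synthetic noise-free emission data

ratios = np.empty((n_rep, m))
for r in range(n_rep):
    # 1% relative Gaussian noise added separately to each data vector, as in the paper
    y_e = y_e_calc * (1.0 + 0.01 * rng.standard_normal(m))
    y_f = y_f_calc * (1.0 + 0.01 * rng.standard_normal(m))
    ratios[r] = y_f / y_e                         # noisy Born ratio

# Diagonal approximation of the Born-ratio noise covariance Γ_e
Gamma_e = np.diag(ratios.var(axis=0, ddof=1))
```

Since both data vectors carry independent 1% noise, the relative standard deviation of each ratio channel is close to sqrt(2)·1%.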

4.2. Prior models. For drawing samples of the parameters

(29) x = (µ_a, µ_s, h)^T

for the estimation of the approximation error statistics, we used a Gaussian (Markov random field) [42] prior model along with a non-negativity constraint [26],

(30) π(x) = π_G(x) π_+(x).
Here π_G(x) is a Gaussian smoothness prior model and π_+(x) is the non-negativity constraint. Section 4.2.1 describes the implementation of the Gaussian smoothness prior model, and Section 4.2.2 describes the implementation of the non-negativity constraint.

4.2.1. Proper Gaussian smoothness prior.
As we need to draw samples of the unknown x for the estimation of the approximation errors, we need a proper (integrable) prior distribution for the unknowns. In this study we used a proper Gaussian smoothness prior π_G(x) for the unknowns. In this model, the absorption, scattering and fluorophore concentration images µ_a, µ_s and h were modeled as mutually independent Gaussian random fields with the joint prior model

(31) π_G(x) = N(x_*, Γ_x),   x_* = (µ_a,*, µ_s,*, h_*)^T,   Γ_x = diag(Γ_µa, Γ_µs, Γ_h).

In the construction of the mean vectors µ_a,*, µ_s,*, h_* and covariances Γ_µa, Γ_µs and Γ_h, the random field, say f (i.e., either µ_a, µ_s or h), is considered in the form f = f_in + f_bg, where f_in is a spatially inhomogeneous parameter with zero mean, and f_bg is a spatially constant (background) parameter with non-zero mean. For the latter, we can write f_bg = qI, where I ∈ R^n is a vector of ones and q is a scalar random variable with distribution q ∼ N(c, σ²_bg,f). In the construction of the covariance Γ_in,f of the inhomogeneous part, the approximate correlation length can be adjusted to match the size of the expected inhomogeneities, and the marginal variances of the f_k's are tuned based on the expected contrast of the inclusions. We model the distributions f_in and f_bg as mutually independent, that is, the background is mutually independent of the inhomogeneities; thus the covariance of f is Γ_f = Γ_in,f + σ²_bg,f I I^T. See Refs. [21,2,25] for further details, and Ref. [28] for an alternative construction of a proper smoothness prior. In this paper, the correlation length for µ_a, µ_s and h in the prior was set to 16 mm. The means and standard deviations of the background and inhomogeneities were chosen as given in Table 1. The same prior parameters (correlation length, means and standard deviations) were used in the 2D and 3D priors.

Table 1.

             mean                background std           inhomogeneity std
absorption   µ_a,* = 0.01 mm⁻¹   σ_bg,µa = 0.25×µ_a,*     σ_in,µa = 0.5×µ_a,*
scattering   µ_s,* = 1 mm⁻¹      σ_bg,µs = 0.25×µ_s,*     σ_in,µs = 0.5×µ_s,*
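A draw from the proper smoothness prior can be sketched as follows on a 1-D grid: the covariance is the sum of an inhomogeneity covariance with the 16 mm correlation length and a rank-one background term, and the draw is truncated as in the non-negativity constraint (32). The squared-exponential covariance family and the grid are assumptions for illustration; the paper fixes only the correlation length and the marginal variances.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 64
coords = np.linspace(0.0, 50.0, n)     # 1-D grid over a 50 mm domain (assumed)

corr_len = 16.0                        # correlation length from the paper (mm)
mu_a_star = 0.01                       # prior mean of absorption (mm^-1)
sig_in = 0.5 * mu_a_star               # inhomogeneity std (Table 1)
sig_bg = 0.25 * mu_a_star              # background std (Table 1)

# Inhomogeneity covariance: squared-exponential kernel (an assumed family)
d = np.abs(coords[:, None] - coords[None, :])
Gamma_in = sig_in**2 * np.exp(-0.5 * (d / corr_len) ** 2)

# Proper prior covariance: Γ_f = Γ_in,f + σ_bg² 1 1^T
Gamma = Gamma_in + sig_bg**2 * np.ones((n, n))
L = np.linalg.cholesky(Gamma + 1e-10 * np.eye(n))  # jitter for numerical stability
sample = mu_a_star + L @ rng.standard_normal(n)

# Non-negativity truncation as in (32)
sample = np.maximum(sample, 1e-6)
```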
4.2.2. Non-negativity constraint. In addition to the Gaussian smoothness prior, a non-negativity constraint was applied when drawing samples for the computation of the approximation error statistics. A tolerance value for x was chosen, and the values of x drawn from the prior π_G(x) that were below the tolerance were set equal to it,

(32) x = max(x, tol).
Here "tol" is the tolerance value, tol = 10e −6 was used in this study. Two random draws of x = (µ a , µ s , h) T from π(x) in the 2D simulation domain are shown in Figure 1. The non-negativity prior during the computation of the MAP estimates (20), (13), (19) is taken into account by applying an exterior point search method [26]. In this method, the non-negativity problem is approximated by a sequence of unconstrained problems h MAP = arg min h {− log(π(h|y)) − log(π(h))}, where π(h|y) is the likelihood and π(h) is the prior probability distribution. E j (h) is a penalty functional that is used to penalize the negative components of the solution h, the super index j is used to denote the j th problem in the sequence. The mean h * ∈ R n and Cholesky factor L T h L h = Γ −1 h ∈ R n×n is obtained from the Gaussian smoothness prior model π G (x). In this study we employ a functional of the form In this paper, we selected the number of samples N s by studying the convergence of the MAP-AEM estimate in a single 2D simulation case with respect the number of samples used in the construction of AE statistics. That is, we computed ε * and Γ ε using different number of samples, and then selected the smallest N s that produced (essentially) the same estimate that was obtained using a very large number of samples in the training of the approximation error. This way we found out that N s = 700 samples was sufficient in 2D simulations. Similar tests were carried out for 3D simulations. The computation time for simulation of the approximation error statistics and the number of samples used in the computations are tabulated in Table 2. Table 2. Number of samples, N s used in the computation of AE statistics and the offline computation times, t CPU 's of the approximation errors in 2D and 3D.
     No. of samples    t_CPU (seconds)
2D   700               413
3D   4000              62277

Note that setting up the error model is a computationally intensive task. The computation time for setting up the error model is roughly the number of samples N_s multiplied by the time for a forward solution with the accurate and approximate models. However, the error model needs to be estimated only once for a fixed measurement setup, and this estimation can be done offline.

5. Results and discussion.
(MAP-REF): The MAP-REF estimates with the correct optical coefficients (µ_a, µ_s) are shown in column 2, Figure 3. This corresponds to the reference estimate of conventional reconstruction when (µ_a, µ_s) are known exactly.

(MAP-CEM): The MAP-CEM estimates with fixed incorrect optical coefficients (µ_a,*, µ_s,*) are shown in column 3, Figure 3. This corresponds to the estimate with conventional reconstruction when the nominal values of (µ_a, µ_s) are incorrect.
(MAP-AEM): The MAP-AEM estimates with the same fixed incorrect coefficients (µ_a,*, µ_s,*) are shown in column 4, Figure 3. This corresponds to the estimate with the Bayesian approximation error model when the nominal values of (µ_a, µ_s) are incorrect.

The relative errors in the MAP estimates,

(40) δh_ref = 100% · ‖h_ref − h_true‖ / ‖h_true‖,

(41) δh_cem = 100% · ‖h_cem − h_true‖ / ‖h_true‖,

(42) δh_aem = 100% · ‖h_aem − h_true‖ / ‖h_true‖,

where h_ref, h_cem and h_aem are the estimated fluorophore distributions and h_true is the true fluorophore distribution, are shown in Table 3.
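The relative error metric of (40)-(42) is simply the normalized Euclidean distance to the true concentration; the vectors below are toy values.

```python
import numpy as np

h_true = np.array([0.0, 1.0, 2.0, 1.0])   # toy true fluorophore distribution
h_est = np.array([0.1, 0.9, 1.8, 1.0])    # toy estimate

# Relative error (%) as used to compare MAP-REF / MAP-CEM / MAP-AEM
rel_err = 100.0 * np.linalg.norm(h_est - h_true) / np.linalg.norm(h_true)
```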
Figure 3 shows the 2D reconstruction results. The second column shows the MAP-REF estimates computed using the correct absorption and scattering values of Figure 2 (rows in respective order). The third column shows the MAP-CEM estimates using the incorrect absorption and scattering values (forward matrix A(µ_a,*, µ_s,*)); the µ_a,* and µ_s,* are shown in the third and fourth columns of Figure 2. The fourth column shows the MAP-AEM estimates using the same incorrect forward matrix A(µ_a,*, µ_s,*).

As can be seen, the approximation error approach produces estimates that are free of the artefacts and are almost as good as the reference estimates (MAP-REF), which were computed using the correct values of (µ_a, µ_s). The observation is also supported by the quantitative estimation errors in Table 3.

Table 3. Relative error (%) in the MAP estimates (40), (41), (42) for each 2D simulation test case (test cases are numbered from top to bottom in Figs. 1 and 2).

Comparing the MAP-CEM and MAP-AEM estimates to the reference estimate MAP-REF in the 3D case, it can be seen that the approximation error approach has again recovered well from the unknown absorption and scattering, leading to an estimate which shows good recovery of the hotspots (kidneys and lungs) and is free of the distortions and artefacts that are seen in the MAP-CEM estimate.

5.3. Discussion of the results. The results show that the Bayesian approximation error model efficiently compensates for the modeling errors caused by inaccurately known (µ_a, µ_s) of the body and produces estimates that are qualitatively similar to the conventional estimates with exactly known µ_a and µ_s. The inclusions in the fluorescent contrast are well localized with the Bayesian approximation error model, and the estimates are relatively free of the artifacts that are present in the estimates h_cem with the conventional error model using the same incorrect values (µ_a,*, µ_s,*); see rows 2-5 in Figure 3 and the 3D results in Figure 4.
A notable feature in the reconstructions is that the estimate of h has lower contrast with the approximation error model than with the conventional noise model. This can be explained by the fact that the covariance of the combined noise e + ε is larger than the covariance of the random noise alone (i.e., Γ_e + Γ_ε > Γ_e), implying that the relative weight of the data residual term compared to the prior (or regularization) term becomes smaller in the MAP estimate with the approximation error model. Thus, loosely speaking, the estimate is drawn more strongly towards the prior mean, leading to a loss of contrast. This is also seen from the error estimates in Table 3. The first row corresponds to the case where (µ_a,*, µ_s,*) are correct; there the estimation error with the approximation error model is larger than with the conventional noise model, and this discrepancy arises from the lower contrast of the estimate of h with the approximation error model. However, when the values (µ_a,*, µ_s,*) are incorrect (rows 2-5 in Table 3 and Figure 3), the estimation error with the approximation error model is smaller than with the conventional noise model, and moreover, the estimation error does not change much as the distance of (µ_a,*, µ_s,*) from the true values (µ_a, µ_s) increases when moving from row 2 to row 5 in the table and in Figure 3.

Figure 4. Fourth column shows the MAP-CEM estimate using the incorrect absorption and scattering values (forward matrix A(µ_a,*, µ_s,*)). Fifth column shows the MAP-AEM estimates using the same incorrect forward matrix A(µ_a,*, µ_s,*). Bottom-right: mouse surface with the positions of the sources marked with yellow asterisks (*) and the positions of the detectors marked with blue asterisks.
In this manuscript, the spectral dependence of the absorption and scattering coefficients was omitted and the parameters were assumed to be wavelength independent. We have also tested the robustness of the proposed approach using simulations where the absorption and scattering are different at the excitation and emission wavelengths, and found that the spectral-independence approximation is valid for moderate changes (∼5%) in the values of the optical parameters between the emission and excitation wavelengths. In cases where larger changes in the values of (µ_a, µ_s) (or even a completely different structure of the parameters) are expected, the extension of the approximation error model is straightforward: the absorption and scattering at the excitation and emission wavelengths (λ_1 and λ_2) are modelled as mutually independent with separate prior models, and the approximation error is modelled as ε = Ã(µ_a,λ1, µ_s,λ1, µ_a,λ2, µ_s,λ2)h − A(µ_a,*, µ_s,*)h, where Ã(µ_a,λ1, µ_s,λ1, µ_a,λ2, µ_s,λ2) is a forward model that takes the spectral dependence of (µ_a, µ_s) into account.

6. Conclusions. In this paper, the recovery from errors caused by incorrectly modeled absorption and scattering in fDOT was considered. The Born ratio is known to tolerate artefacts due to unknown absorption and scattering to some extent. However, when the absorption and scattering properties are highly heterogeneous, incorrectly modelled absorption and scattering induces errors in the fDOT reconstructions.
In this paper, the Bayesian approximation error approach was applied for the compensation of the errors caused by unknown absorption and scattering in fDOT. The modeling errors caused by the inaccurately known absorption and scattering were modeled as an additive modeling error noise in the observation model, and the posterior density model was then marginalized approximately over the unknown modeling errors by using a Gaussian approximation for the joint statistics of the primary unknown (fluorophore concentration) and the modeling errors.
The approach was tested with 2D simulations with various target distributions of absorption and scattering. The results show that the approximation error model can efficiently compensate for the reconstruction artefacts caused by unknown absorption and scattering coefficients, even in the cases of highly heterogeneous absorption and scattering coefficients where the conventional estimates using the Born ratio contained severe artefacts. The approach was also tested with a 3D simulation using a mouse atlas. The MAP estimates using the Bayesian approximation error model show better localisation of the fluorophore concentration compared to the conventional estimate with the normalised Born approximation model.
A tradeoff of the Bayesian approximation error model was found to be a small loss in contrast of the estimated fluorophore concentrations. We suggest that the approximation error model would be a feasible complement to the Born ratio model for handling the uncertainty related to the unknown absorption and scattering parameters in fDOT.