
# Denoising convolution algorithms and applications to SAR signal processing

*Corresponding author: Semyon Tsynkov

**Abstract.** Convolutions are among the most important operations in signal processing. They often involve large arrays and require significant computing time. Moreover, in practice, the signal data to be processed by convolution may be corrupted by noise. In this paper, we introduce a new method for computing convolutions in the quantized tensor train (QTT) format while removing noise from the data by means of the QTT decomposition. We demonstrate the performance of our method on a common mathematical model for synthetic aperture radar (SAR) processing that involves a sinc kernel, and we account for the overall cost of decomposing the original data array, computing the convolutions, and reformatting the data back into full arrays.
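The central mechanism can be sketched in a few lines: a signal of length $2^K$ is reshaped into a $K$-way tensor with mode size 2 and factored by successive truncated SVDs (TT-SVD); discarding small singular values both compresses the data and filters noise. The following is a minimal illustration with a generic relative-threshold truncation, not the authors' implementation; the test signal, noise level, and threshold are assumptions chosen for the sketch:

```python
import numpy as np

def tt_svd(tensor, eps=0.1):
    """TT-SVD with hard-threshold truncation: singular values of each
    unfolding below eps * (largest singular value) are discarded, which
    compresses the array and filters small-scale noise."""
    shape = tensor.shape
    cores, r, mat = [], 1, tensor
    for k in range(len(shape) - 1):
        mat = mat.reshape(r * shape[k], -1)
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r_new = max(1, int(np.sum(s >= eps * s[0])))  # truncated rank
        cores.append(U[:, :r_new].reshape(r, shape[k], r_new))
        mat = s[:r_new, None] * Vt[:r_new]  # carry the remainder forward
        r = r_new
    cores.append(mat.reshape(r, shape[-1], 1))
    return cores

def tt_full(cores):
    """Contract the train of cores back into a full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    # boundary ranks are 1, so the result reshapes to the mode sizes
    return out.reshape([c.shape[1] for c in cores])

# Quantized (QTT) view of a 1-D signal: length 2^K -> K modes of size 2.
K = 10
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 2 ** K, endpoint=False)
clean = np.sin(x)
noisy = clean + 0.05 * rng.standard_normal(2 ** K)

cores = tt_svd(noisy.reshape((2,) * K), eps=0.1)
denoised = tt_full(cores).reshape(-1)

err_noisy = np.linalg.norm(noisy - clean) / np.linalg.norm(clean)
err_qtt = np.linalg.norm(denoised - clean) / np.linalg.norm(clean)
```

Because a smooth signal such as a sine has very low QTT ranks while white noise spreads over many singular directions, the truncated reconstruction retains the signal and sheds most of the noise, so `err_qtt` comes out well below `err_noisy`.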

Mathematics Subject Classification: Primary: 15A69, 65F55, 86A22; Secondary: 68W20.


Figure 1.  Kernel function (7) with $\Delta_x = 0.04\pi$

Figure 2.  Top Left: True convolution of data without noise, ${\mathit{\boldsymbol{I}}}$. Top Right: Function data with noise, ${\mathit{\boldsymbol{f}}}_\xi$. Middle Left: True convolution of data with noise, ${\mathit{\boldsymbol{I}}}_\xi$. Middle Right: Convolution using the max rank TT-SVD algorithm, ${\mathit{\boldsymbol{I}}}_{QTT_0}$. Bottom Left: Absolute error of ${\mathit{\boldsymbol{I}}}_\xi$. Bottom Right: Absolute error of ${\mathit{\boldsymbol{I}}}_{QTT_0}$

Figure 3.  Top Left: True convolution of data without noise, ${\mathit{\boldsymbol{I}}}$. Top Right: Zoomed in graph of ${\mathit{\boldsymbol{I}}}_{\xi}$, ${\mathit{\boldsymbol{I}}}_{QTT_0}$, and ${\mathit{\boldsymbol{I}}}_{ref}$. Bottom Left: Absolute error of ${\mathit{\boldsymbol{I}}}_\xi$. Bottom Right: Absolute error of ${\mathit{\boldsymbol{I}}}_{QTT_0}$

Figure 4.  Top Left: True convolution of data without noise, ${\mathit{\boldsymbol{I}}}$. Top Right: Function data with noise, ${\mathit{\boldsymbol{f}}}_\xi$. Middle Left: True convolution of data with noise, ${\mathit{\boldsymbol{I}}}_\xi$. Middle Right: Convolution using the max rank TT-SVD algorithm, ${\mathit{\boldsymbol{I}}}_{QTT_0}$. Bottom Left: Absolute error of ${\mathit{\boldsymbol{I}}}_\xi$. Bottom Right: Absolute error of ${\mathit{\boldsymbol{I}}}_{QTT_0}$

Table 1.  $l_2$-norm relative error for $K = 20$ (Examples 1 and 2) and $K = 10$ (Example 3)

| example | ${\mathit{\boldsymbol{I}}}_\xi$ | ${\mathit{\boldsymbol{I}}}_{QTT_0}$ | ${\mathit{\boldsymbol{I}}}_{QTT_r}$ | ${\mathit{\boldsymbol{I}}}_{\delta}$ | ${\mathit{\boldsymbol{I}}}_{RTT}$ | ${\mathit{\boldsymbol{I}}}_{lr}$ |
|---|---|---|---|---|---|---|
| $1$ | $0.0383$ | $\textbf{0.0028}$ | $0.0102$ | $0.0280_{\delta=0.02}$ | $0.0430$ | - |
| $2$ | $0.0131$ | $\textbf{0.0011}$ | $0.0075$ | $0.0068_{\delta=0.01}$ | $0.0201$ | - |
| $3$ | $0.1142$ | $\textbf{0.0151}$ | $0.0447$ | $0.1650_{\delta=0.09}$ | $0.1534$ | $0.0470_{rank=23}$ |

Table 2.  Run time (seconds): Example 1 convolutions

| $K$ | ${\mathit{\boldsymbol{I}}}_\xi$ | ${\mathit{\boldsymbol{I}}}_{QTT_0}$ | ${\mathit{\boldsymbol{I}}}_{QTT_r}$ | ${\mathit{\boldsymbol{I}}}_{\delta}$ | ${\mathit{\boldsymbol{I}}}_{RTT}$ | ${\mathit{\boldsymbol{I}}}_{QTT_0}/{\mathit{\boldsymbol{I}}}_\xi$ |
|---|---|---|---|---|---|---|
| $16$ | $\textbf{0.005}$ | $0.325$ | $0.369$ | $1.485_{\delta=0.02}$ | $0.414$ | $65$ |
| $20$ | $\textbf{0.067}$ | $0.653$ | $0.694$ | $0.479_{\delta=0.02}$ | $0.609$ | $9.7463$ |
| $24$ | $\textbf{1.21}$ | $4.41$ | $4.86$ | $3.10_{\delta=0.02}$ | $2.80$ | $3.6446$ |
| $26$ | $\textbf{5.77}$ | $17.75$ | $19.68$ | $13.93_{\delta=0.02}$ | $10.97$ | $3.0763$ |
| $28$ | $66.6$ | $95.5$ | $108.7$ | $79.0_{\delta=0.02}$ | $\textbf{62.49}$ | $1.4339$ |

Table 3.  Data storage for Example 1

| $K$ | ${\boldsymbol{f}}_\xi$ | $\boldsymbol{\mathcal{F}}_{\xi}$ |
|---|---|---|
| $16$ | $65{,}536$ | $2088$ |
| $20$ | $1{,}048{,}576$ | $2888$ |
| $24$ | $16{,}777{,}216$ | $3688$ |
| $26$ | $67{,}108{,}864$ | $4088$ |
| $28$ | $268{,}435{,}456$ | $4488$ |
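The contrast in Table 3 reflects the different growth laws: a full array on a dyadic grid needs $2^K$ numbers, while a QTT representation with $K$ cores of mode size 2 and ranks bounded by $r$ needs roughly $2r^2K$. The following sketch computes this bound; the uniform rank cap $r$ is an assumption (the actual per-core ranks in the paper's experiments vary), so the bound only matches the reported counts in order of magnitude:

```python
def full_storage(K):
    # full 1-D data array on a dyadic grid of 2^K points
    return 2 ** K

def qtt_storage(K, r):
    # K cores of shape (r_{k-1}, 2, r_k): boundary ranks are 1,
    # interior ranks are capped at r
    return 1 * 2 * r + (K - 2) * (r * 2 * r) + r * 2 * 1
```

With $r = 10$, for example, `qtt_storage(28, 10)` gives 5240 numbers, the same order as the 4488 entries reported for $K = 28$, while `full_storage(28)` is 268,435,456; the QTT count grows linearly in $K$, the full count exponentially.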

Table 4.  Run times (seconds): Example 3 convolutions

| $K$ | ${\mathit{\boldsymbol{I}}}_\xi$ | ${\mathit{\boldsymbol{I}}}_{QTT_0}$ | ${\mathit{\boldsymbol{I}}}_{QTT_r}$ | ${\mathit{\boldsymbol{I}}}_{\delta}$ | ${\mathit{\boldsymbol{I}}}_{RTT}$ | ${\mathit{\boldsymbol{I}}}_{lr}$ | ${\mathit{\boldsymbol{I}}}_{QTT_0}/{\mathit{\boldsymbol{I}}}_\xi$ |
|---|---|---|---|---|---|---|---|
| $8$ | $\textbf{0.0034}$ | $0.2213$ | $0.310$ | $7.102_{\delta=0.04}$ | $0.297$ | $0.0090_{rank=2}$ | $65.088$ |
| $10$ | $\textbf{0.0629}$ | $0.9209$ | $1.428$ | $0.670_{\delta=0.04}$ | $1.321$ | $0.154_{rank=2}$ | $14.651$ |
| $12$ | $\textbf{0.948}$ | $8.6462$ | $11.258$ | $22.79_{\delta=0.04}$ | $10.27$ | $5.15_{rank=2}$ | $9.121$ |
| $14$ | $\textbf{58.67}$ | $147.8$ | $151.05$ | $1087_{\delta=0.04}$ | $164.5$ | $286.2_{rank=2}$ | $2.519$ |

Table 5.  Data storage for Example 3

| $K$ | ${\boldsymbol{f}}_\xi$ | $\boldsymbol{\mathcal{F}}_\xi$ |
|---|---|---|
| $8$ | $65{,}536$ | $2088$ |
| $10$ | $1{,}048{,}576$ | $2888$ |
| $12$ | $16{,}777{,}216$ | $3688$ |
| $14$ | $268{,}435{,}456$ | $4488$ |
