ISSN: 1930-8337 | eISSN: 1930-8345


Inverse Problems & Imaging

February 2021, Volume 15, Issue 1

Special issue on mathematical/statistical approaches in data sciences


IPI special issue on "mathematical/statistical approaches in data science" in Inverse Problems and Imaging
Weihong Guo, Yifei Lou, Jing Qin and Ming Yan
2021, 15(1): I-I, doi: 10.3934/ipi.2021007
Reproducing kernel Hilbert space based global and local image segmentation
Liam Burrows, Weihong Guo, Ke Chen and Francesco Torella
2021, 15(1): 1-25, doi: 10.3934/ipi.2020048
Abstract:

Image segmentation is the task of partitioning an image into individual objects, and has many important applications in a wide range of fields. The majority of segmentation methods rely on the image intensity gradient to define edges between objects. However, the intensity gradient fails to identify edges when the contrast between two objects is low. In this paper we introduce methods to make such weak edges more prominent, in order to improve the segmentation of low-contrast objects. This is done for two kinds of segmentation models: global and local. We use a combination of a reproducing kernel Hilbert space and approximated Heaviside functions to decompose an image, and then show how this decomposition can be applied to a segmentation model. We present results and robustness to noise, and demonstrate that the reconstruction and segmentation models can be combined, allowing us to obtain both the decomposition and the segmentation simultaneously.
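
The decomposition above relies on approximated Heaviside functions. As an illustration, here is a minimal sketch of one standard smooth approximation (an arctan form with smoothing parameter eps); the specific form and parameter value are illustrative assumptions, not necessarily the paper's exact choice.

    # A smooth (approximated) Heaviside function of the kind used in
    # variational segmentation models; the arctan form and eps are
    # illustrative assumptions.
    import numpy as np

    def heaviside_eps(x, eps=1.0):
        """Smooth approximation to the Heaviside step function."""
        return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(x / eps))

    # Values transition smoothly from ~0 to ~1 around x = 0; smaller eps
    # gives a sharper transition.
    x = np.linspace(-5, 5, 11)
    print(heaviside_eps(x, eps=0.5))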

Automatic extraction of cell nuclei using dilated convolutional network
Rajendra K C Khatri, Brendan J Caseria, Yifei Lou, Guanghua Xiao and Yan Cao
2021, 15(1): 27-40, doi: 10.3934/ipi.2020049
Abstract:

Pathological examination has traditionally been performed by visual inspection of hematoxylin and eosin (H&E)-stained images. However, this process is labor intensive, prone to large variations, and lacks reproducibility in the diagnosis of a tumor. We aim to develop an automatic workflow to extract the different cell nuclei found in cancerous tumors portrayed in digital renderings of H&E-stained images. For a given image, we propose a semantic pixel-wise segmentation technique using dilated convolutions. The architecture of our dilated convolutional network (DCN) is based on SegNet, a deep convolutional encoder-decoder architecture. In the encoder, all max pooling layers of SegNet are removed and the convolutional layers are replaced by dilated convolution layers with increasing dilation factors to preserve image resolution. In the decoder, all max unpooling layers are removed and the convolutional layers are replaced by dilated convolution layers with decreasing dilation factors to remove gridding artifacts. We show that dilated convolutions are superior in extracting information from textured images. We test our DCN on both synthetic data sets and a publicly available data set of H&E-stained images, and achieve better results than the state of the art.
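
The dilation pattern described above can be made concrete in a few lines. Below is a minimal sketch, with increasing dilation in the encoder and decreasing dilation in the decoder so that spatial resolution is preserved end to end; the depth, channel widths, and specific dilation factors (1, 2, 4, then 4, 2, 1) are illustrative assumptions, not the paper's exact architecture.

    # Toy dilated convolutional encoder-decoder: no pooling or unpooling,
    # resolution is preserved by setting padding = dilation for 3x3 kernels.
    import torch
    import torch.nn as nn

    def dilated_block(in_ch, out_ch, dilation):
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3,
                      padding=dilation, dilation=dilation),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    class ToyDCN(nn.Module):
        def __init__(self, num_classes=2):
            super().__init__()
            self.encoder = nn.Sequential(      # increasing dilation
                dilated_block(3, 32, dilation=1),
                dilated_block(32, 64, dilation=2),
                dilated_block(64, 64, dilation=4),
            )
            self.decoder = nn.Sequential(      # decreasing dilation
                dilated_block(64, 64, dilation=4),
                dilated_block(64, 32, dilation=2),
                nn.Conv2d(32, num_classes, kernel_size=1),  # pixel-wise scores
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    # A 1x3x64x64 input yields 1x2x64x64 logits: resolution is unchanged.
    logits = ToyDCN()(torch.randn(1, 3, 64, 64))
    print(logits.shape)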

Global convergence and geometric characterization of slow to fast weight evolution in neural network training for classifying linearly non-separable data
Ziang Long, Penghang Yin and Jack Xin
2021, 15(1): 41-62, doi: 10.3934/ipi.2020077
Abstract:

In this paper, we study the dynamics of gradient descent in learning neural networks for classification problems. Unlike existing works, we consider the linearly non-separable case where the training data of different classes lie in orthogonal subspaces. We show that when the network has a sufficient (but not exceedingly large) number of neurons, (1) the corresponding minimization problem has a desirable landscape where all critical points are global minima with perfect classification; and (2) gradient descent is guaranteed to converge to the global minima. Moreover, we discover a geometric condition on the network weights such that, when it is satisfied, the weight evolution transitions from a slow phase of weight direction spreading to a fast phase of weight convergence. The geometric condition is that the convex hull of the weights projected onto the unit sphere contains the origin.
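
The geometric condition in the last sentence can be tested numerically: the origin lies in the convex hull of points p_1, ..., p_n exactly when some convex combination of them equals zero, which is a small feasibility linear program. The sketch below is an illustrative check, not the paper's procedure.

    # Check whether the origin is in the convex hull of the sphere-projected
    # weight vectors: find lambda >= 0 with sum(lambda) = 1 and P^T lambda = 0.
    import numpy as np
    from scipy.optimize import linprog

    def origin_in_convex_hull(points):
        """points: (n, d) array. True if 0 is a convex combination of the rows."""
        n, d = points.shape
        A_eq = np.vstack([points.T, np.ones((1, n))])
        b_eq = np.concatenate([np.zeros(d), [1.0]])
        res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
        return res.success

    w = np.random.randn(10, 3)
    w /= np.linalg.norm(w, axis=1, keepdims=True)  # project onto the unit sphere
    print(origin_in_convex_hull(w))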

Some worst-case datasets of deterministic first-order methods for solving binary logistic regression
Yuyuan Ouyang and Trevor Squires
2021, 15(1): 63-77, doi: 10.3934/ipi.2020047
Abstract:

We present in this paper some worst-case datasets of deterministic first-order methods for solving large-scale binary logistic regression problems. Under the assumption that the number of algorithm iterations is much smaller than the problem dimension, with our worst-case datasets at least $\mathcal{O}(1/\sqrt{\varepsilon})$ first-order oracle queries are required to compute an $\varepsilon$-approximate solution. From the viewpoint of traditional iteration complexity analysis, the binary logistic regression loss functions with our worst-case datasets are new worst-case function instances among the class of smooth convex optimization problems.
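
For reference, the first-order oracle in question returns the value and gradient of the binary logistic loss $f(x) = \frac{1}{N}\sum_i \log(1 + \exp(-b_i a_i^\top x))$. The sketch below writes out such an oracle for generic data; the random A and b are placeholders, not the worst-case datasets constructed in the paper.

    # First-order oracle for binary logistic regression: loss and gradient.
    import numpy as np

    def logistic_oracle(x, A, b):
        z = b * (A @ x)                          # margins b_i * a_i^T x
        loss = np.mean(np.logaddexp(0.0, -z))    # stable log(1 + exp(-z))
        grad = A.T @ (-b / (1.0 + np.exp(z))) / len(b)
        return loss, grad

    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 20))           # N = 100 samples, dimension 20
    b = rng.choice([-1.0, 1.0], size=100)        # binary labels
    loss, grad = logistic_oracle(np.zeros(20), A, b)
    print(loss, np.linalg.norm(grad))            # loss at x = 0 equals log(2)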

Stochastic greedy algorithms for multiple measurement vectors
Jing Qin, Shuang Li, Deanna Needell, Anna Ma, Rachel Grotheer, Chenxi Huang and Natalie Durgin
2021, 15(1): 79-107, doi: 10.3934/ipi.2020066
Abstract:

Sparse representation of a single measurement vector (SMV) has been explored in a variety of compressive sensing applications. Recently, SMV models have been extended to solve multiple measurement vector (MMV) problems, where the underlying signal is assumed to have a joint sparse structure. To circumvent the NP-hardness of the $\ell_0$ minimization problem, many deterministic MMV algorithms solve convex relaxed models, with limited efficiency. In this paper, we develop stochastic greedy algorithms for solving the joint sparse MMV reconstruction problem. In particular, we propose the MMV Stochastic Iterative Hard Thresholding (MStoIHT) and MMV Stochastic Gradient Matching Pursuit (MStoGradMP) algorithms, and we also utilize the mini-batching technique to further improve their performance. Convergence analysis indicates that the proposed algorithms converge faster than their SMV counterparts, i.e., concatenated StoIHT and StoGradMP, under certain conditions. Numerical experiments illustrate the superior performance of the proposed algorithms over their SMV counterparts.
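
To convey the flavor of these methods, here is a heavily simplified sketch of a stochastic iterative hard thresholding loop for the row-sparse MMV model min ||Y - AX||_F^2: a stochastic gradient step followed by projection onto k-row-sparse matrices. The batch scheme, step size, and iteration count are illustrative assumptions; the paper's MStoIHT and MStoGradMP differ in their details.

    import numpy as np

    def row_hard_threshold(X, k):
        # Joint sparsity: keep the k rows with largest l2 norm, zero the rest.
        keep = np.argsort(np.linalg.norm(X, axis=1))[-k:]
        Z = np.zeros_like(X)
        Z[keep] = X[keep]
        return Z

    def stochastic_iht_mmv(Y, A, k, step=1.0, batch=40, iters=300, seed=0):
        rng = np.random.default_rng(seed)
        m, n = A.shape
        X = np.zeros((n, Y.shape[1]))
        for _ in range(iters):
            i = rng.choice(m, size=batch, replace=False)    # random block
            G = (m / batch) * A[i].T @ (A[i] @ X - Y[i])    # unbiased gradient estimate
            X = row_hard_threshold(X - step * G, k)         # step + projection
        return X

    # Tiny demo: recover a 5-row-sparse signal matrix from random measurements.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((80, 120)) / np.sqrt(80)        # ~unit-norm columns
    X_true = np.zeros((120, 4))
    X_true[rng.choice(120, size=5, replace=False)] = rng.standard_normal((5, 4))
    X_hat = stochastic_iht_mmv(A @ X_true, A, k=5)
    print(np.linalg.norm(X_hat - X_true) / np.linalg.norm(X_true))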

Fast algorithms for robust principal component analysis with an upper bound on the rank
Ningyu Sha, Lei Shi and Ming Yan
2021, 15(1): 109-128, doi: 10.3934/ipi.2020067
Abstract:

Robust principal component analysis (RPCA) decomposes a data matrix into a low-rank part and a sparse part. There are mainly two types of algorithms for RPCA. The first type applies regularization terms to the singular values of a matrix to obtain a low-rank matrix; however, calculating singular values can be very expensive for large matrices. The second type represents the low-rank matrix as the product of two small matrices. These algorithms are faster than the first type because no singular value decomposition (SVD) of the full matrix is required; however, the rank of the low-rank matrix must be specified, and an accurate rank estimate is needed to obtain a reasonable solution. In this paper, we propose algorithms that combine both types. Our proposed algorithms require only an upper bound on the rank and SVDs of small matrices. First, they are faster than the first type because the cost of SVD on small matrices is negligible. Second, they are more robust than the second type because an upper bound on the rank, instead of the exact rank, is required. Furthermore, we apply the Gauss-Newton method to increase the speed of our algorithms. Numerical experiments demonstrate the better performance of our proposed algorithms.
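
As a rough illustration of the combined idea, the sketch below enforces a rank upper bound r through a factorization, so the only SVD taken is of a small r x n matrix, and peels off a sparse part by soft thresholding. The update order, thresholding rule, and parameter values are illustrative assumptions, not the paper's algorithm.

    import numpy as np

    def soft_threshold(X, tau):
        return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

    def rpca_rank_bounded(M, r, tau, iters=50, seed=0):
        rng = np.random.default_rng(seed)
        V = rng.standard_normal((M.shape[1], r))  # r is only an upper bound
        S = np.zeros_like(M)
        for _ in range(iters):
            # Low-rank step: orthogonalize (M - S) V, then take an SVD of the
            # small r x n matrix instead of the full m x n matrix.
            Q, _ = np.linalg.qr((M - S) @ V)
            U_s, sig, Vt = np.linalg.svd(Q.T @ (M - S), full_matrices=False)
            L = Q @ (U_s * sig) @ Vt
            V = Vt.T                              # refreshed row-space factor
            S = soft_threshold(M - L, tau)        # sparse step
        return L, S

    # Tiny demo: rank-3 plus 5%-sparse synthetic matrix, with bound r = 10 > 3.
    rng = np.random.default_rng(1)
    L0 = rng.standard_normal((100, 3)) @ rng.standard_normal((3, 100))
    S0 = (rng.random((100, 100)) < 0.05) * rng.standard_normal((100, 100)) * 5.0
    L_hat, S_hat = rpca_rank_bounded(L0 + S0, r=10, tau=1.0)
    print(np.linalg.norm(L_hat - L0) / np.linalg.norm(L0))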

Adversarial defense via the data-dependent activation, total variation minimization, and adversarial training
Bao Wang, Alex Lin, Penghang Yin, Wei Zhu, Andrea L. Bertozzi and Stanley J. Osher
2021, 15(1): 129-145, doi: 10.3934/ipi.2020046
Abstract:

We improve the robustness of deep neural networks (DNNs) to adversarial attacks by using an interpolating function as the output activation. This data-dependent activation remarkably improves both the generalization and the robustness of DNNs. On the CIFAR10 benchmark, we raise the robust accuracy of the adversarially trained ResNet20 from $\sim 46\%$ to $\sim 69\%$ under the state-of-the-art Iterative Fast Gradient Sign Method (IFGSM) adversarial attack. When we combine this data-dependent activation with total variation minimization on adversarial images and training data augmentation, we achieve an improvement in robust accuracy of $38.9\%$ for ResNet56 under the strongest IFGSM attack. Furthermore, we provide an intuitive explanation of our defense by analyzing the geometry of the feature space.
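
The total variation minimization component amounts to denoising the (possibly adversarial) input before classification. Below is a minimal sketch using scikit-image's Chambolle TV solver; the weight value is an illustrative choice, not the paper's setting (and older scikit-image versions take multichannel=True instead of channel_axis).

    # TV-minimization preprocessing: smooth away small adversarial
    # perturbations while preserving large-scale image structure.
    import numpy as np
    from skimage.restoration import denoise_tv_chambolle

    def tv_defend(image, weight=0.1):
        return denoise_tv_chambolle(image, weight=weight, channel_axis=-1)

    # Stand-in for an adversarial CIFAR10 image in [0, 1].
    x_adv = np.clip(np.random.rand(32, 32, 3), 0.0, 1.0)
    x_clean = tv_defend(x_adv)
    print(x_clean.shape)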

A new initialization method based on normed statistical spaces in deep networks
Hongfei Yang, Xiaofeng Ding, Raymond Chan, Hui Hu, Yaxin Peng and Tieyong Zeng
2021, 15(1): 147-158, doi: 10.3934/ipi.2020045
Abstract:

Training deep neural networks can be difficult. For classical neural networks, the initialization method by Glorot and Bengio, later generalized by He, Zhang, Ren and Sun, can facilitate stable training. However, with the recent development of new layer types, we find that the above-mentioned initialization methods may fail to lead to successful training. Building on these two methods, we propose a new initialization by studying the parameter space of a network. Our principle is to constrain the growth of parameters in different layers in a consistent way. To do so, we introduce a norm on the parameter space and use this norm to measure the growth of parameters. Our new method is suitable for a wide range of layer types, especially for layers with parameter-sharing weight matrices.
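
One simple way to act on this principle is to draw weights with a standard initializer and then rescale each layer so that a chosen norm of its weight matrix is equal across layers. The sketch below does this with the spectral norm and a target value of 1.0; both choices are illustrative assumptions, not the paper's exact construction.

    # Norm-controlled initialization: He-style draw, then per-layer rescaling
    # so every layer has the same spectral norm.
    import torch
    import torch.nn as nn

    @torch.no_grad()
    def normed_init(model, target_norm=1.0):
        for m in model.modules():
            if isinstance(m, nn.Linear):
                nn.init.kaiming_normal_(m.weight)           # He et al. draw
                w = m.weight.view(m.weight.shape[0], -1)
                spec = torch.linalg.matrix_norm(w, ord=2)   # largest singular value
                m.weight.mul_(target_norm / spec)           # equalize growth
                if m.bias is not None:
                    nn.init.zeros_(m.bias)

    net = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
    normed_init(net)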

Fast non-convex low-rank matrix decomposition for separation of potential field data using minimal memory
Dan Zhu, Rosemary A. Renaut, Hongwei Li and Tianyou Liu
2021, 15(1): 159-183, doi: 10.3934/ipi.2020076
Abstract:

A fast non-convex low-rank matrix decomposition method for potential field data separation is presented. The singular value decomposition of the large trajectory matrix, which is also a block Hankel matrix, is obtained using a fast randomized singular value decomposition algorithm in which fast block Hankel matrix-vector multiplications are implemented with minimal memory storage. This fast block Hankel matrix randomized SVD algorithm is integrated into the AltProj algorithm, a standard non-convex method for solving the robust principal component analysis optimization problem. The improved estimation of the partial singular value decomposition avoids the explicit construction of the trajectory matrix in the robust principal component analysis optimization problem. Hence, gravity and magnetic data matrices of large size can be handled and potential field data separation is achieved with better computational efficiency. The presented algorithm is also robust, and hence algorithm-dependent parameters are easily determined. The performance of the algorithm, with and without the efficient estimation of the low-rank matrix, is contrasted for the separation of synthetic gravity and magnetic data matrices of different sizes. These results demonstrate that the presented algorithm is not only computationally more efficient but also more accurate. Moreover, it is possible to solve far larger problems: for the adopted computational environment, matrices larger than $205 \times 205$ generate "out of memory" exceptions without the improvement, whereas a matrix of size $2001 \times 2001$ can now be processed in $1062.29$ s. Finally, the presented algorithm is applied to separate real gravity and magnetic data in the Tongling area, Anhui province, China. Areas which may exhibit mineralization are inferred based on the separated anomalies.
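
The key primitive above is a Hankel matrix-vector product computed without forming the matrix: a Hankel matvec is a convolution, so FFTs give it in O(N log N) time and O(N) memory. The 1-D sketch below illustrates the idea; the paper works with block Hankel matrices built from 2-D grids, which generalize this via 2-D FFTs.

    # Fast Hankel matvec via FFT convolution, checked against the explicit matrix.
    import numpy as np

    def hankel_matvec(h, x, m):
        """y = H @ x for the m x n Hankel matrix H[i, j] = h[i + j],
        where len(h) = m + n - 1, without ever forming H."""
        n = len(x)
        full_len = len(h) + n - 1
        size = 1 << (full_len - 1).bit_length()     # next power of two
        conv = np.fft.irfft(np.fft.rfft(h, size) * np.fft.rfft(x[::-1], size), size)
        return conv[n - 1 : n - 1 + m]

    # Sanity check on a small example.
    m, n = 5, 4
    rng = np.random.default_rng(0)
    h = rng.standard_normal(m + n - 1)
    x = rng.standard_normal(n)
    H = np.array([[h[i + j] for j in range(n)] for i in range(m)])
    print(np.allclose(H @ x, hankel_matvec(h, x, m)))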
