
Mathematical Foundations of Computing

November 2020, Volume 3, Issue 4

Special issue on analysis in data science: Methods and applications



Preface of the special issue on analysis in data science: Methods and applications
Xin Guo and Lei Shi
2020, 3(4): i-ii. doi: 10.3934/mfc.2020026
Sketch-based image retrieval via CAT loss with elastic net regularization
Jia Cai, Guanglong Xu and Zhensheng Hu
2020, 3(4): 219-227. doi: 10.3934/mfc.2020013

Fine-grained sketch-based image retrieval (FG-SBIR) is an important problem that uses free-hand human sketches as queries to perform instance-level retrieval of photos. Human sketches are generally highly abstract and iconic, which makes FG-SBIR a challenging task. Existing FG-SBIR approaches use a triplet loss with $\ell_2$ regularization or a higher-order energy function to improve retrieval performance; they neglect the feature gap between the different domains (sketches and photos) and require selecting a weight layer matrix, which yields high computational complexity. In this paper, we define a new CAT loss function with elastic net regularization based on an attention model. It can close the feature gap between different subnetworks and capture the sparsity of the sketches. Experiments demonstrate that the proposed approach is competitive with state-of-the-art methods.
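The abstract does not give the exact form of the CAT loss; as a rough illustration of its ingredients, here is a minimal numpy sketch combining a generic triplet loss with an elastic net penalty. The function name, margin, and penalty weights are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def triplet_elastic_net_loss(anchor, positive, negative, weights,
                             margin=0.2, lam1=1e-3, lam2=1e-3):
    """Triplet hinge loss plus an elastic net penalty on model weights.

    A generic stand-in for the paper's CAT loss: the elastic net term
    lam1*||w||_1 + lam2*||w||_2^2 promotes sparsity (natural for sketch
    features) while keeping the smoothness of ridge regularization.
    """
    d_pos = np.sum((anchor - positive) ** 2)  # squared distance anchor-positive
    d_neg = np.sum((anchor - negative) ** 2)  # squared distance anchor-negative
    hinge = max(0.0, d_pos - d_neg + margin)  # pull positive closer than negative
    penalty = lam1 * np.sum(np.abs(weights)) + lam2 * np.sum(weights ** 2)
    return hinge + penalty
```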

Inpainting via sparse recovery with directional constraints
Xuemei Chen and Julia Dobrosotskaya
2020, 3(4): 229-247. doi: 10.3934/mfc.2020025

Image inpainting is a particular case of the image completion problem. We describe a novel method that amends the general scenario of using sparse or TV-based recovery for inpainting purposes by the efficient use of adaptive one-dimensional directional "sensing" into the unknown domain. We analyze the smoothness of the image near each pixel on the boundary of the unknown domain and formulate linear constraints designed to promote smooth transitions from the known domain in the directions where smooth behavior has been detected. We include a theoretical result relaxing the widely known sufficient condition of sparse recovery based on coherence, as well as observations on how adding the directional constraints can improve the well-posedness of sparse inpainting.

The numerical implementation of our method is based on ADMM. Examples of inpainting of natural images and binary images with edges crossing the unknown domain demonstrate a significant improvement in recovery quality in the presence of adaptive directional constraints. We conclude that the introduced framework is general enough to offer substantial flexibility and to be successfully utilized in a multitude of image recovery scenarios.
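The directional constraints and sensing setup are specific to the paper; as a generic backdrop, here is a minimal ADMM sketch for the sparse-recovery core such an implementation builds on, min_x 0.5||Ax - b||^2 + lam*||x||_1. All parameter values are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding, the proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=300):
    """ADMM for min_x 0.5||Ax - b||^2 + lam*||x||_1.

    Splits the problem with a consensus variable z (x = z): the x-update
    is a ridge-type solve, the z-update a soft-threshold, and u is the
    scaled dual variable.
    """
    n = A.shape[1]
    Atb = A.T @ b
    M = A.T @ A + rho * np.eye(n)  # reused every iteration
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    for _ in range(iters):
        x = np.linalg.solve(M, Atb + rho * (z - u))  # ridge-type x-update
        z = soft_threshold(x + u, lam / rho)         # sparsifying z-update
        u += x - z                                   # dual ascent
    return z
```

With A the identity this reduces to plain soft-thresholding of b; in the paper's setting, the additional directional linear constraints would enter the x-update as extra equality terms.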

Network centralities, demographic disparities, and voluntary participation
Qiang Fu, Yanlong Zhang, Yushu Zhu and Ting Li
2020, 3(4): 249-262. doi: 10.3934/mfc.2020011

This article explores racial and gender disparities in civic-network centrality using various social network methods and regression models. We find that civic networks of women and whites exhibit greater network centrality than their counterparts do. Religious organizations are the hub of civic networks, while labor unions and ethnic/civil-rights organizations are more peripheral. Whites tend to have job-related and nondomestic organizations as the core of their civic network. Women rely on domestic organizations and show little advantage over men in overlapping memberships of voluntary associations. These findings provide a more holistic view of racial and gender disparities in social networks.

Modeling interactive components by coordinate kernel polynomial models
Xin Guo, Lexin Li and Qiang Wu
2020, 3(4): 263-277. doi: 10.3934/mfc.2020010

We propose the use of coordinate kernel polynomials in kernel regression. This new approach, called coordinate kernel polynomial regression, can simultaneously identify active variables and effective interactive components. Reparametrization refinement is found to be critical to improving the modeling accuracy and predictive power. The post-training component selection allows one to identify effective interactive components. Generalization error bounds are used to explain the effectiveness of the algorithm from a learning theory perspective, and simulation studies are used to show its empirical effectiveness.
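The coordinate kernel polynomial construction is the paper's contribution; for orientation only, here is a minimal kernel ridge regression sketch using a product of per-coordinate Gaussian kernels, which illustrates how coordinate-wise kernels enter a kernel method. The kernel choice and all parameters are assumptions, not the paper's model.

```python
import numpy as np

def coordinate_product_kernel(X1, X2, gamma=1.0):
    """Product over coordinates of 1-D Gaussian kernels.

    exp(-gamma * sum_j (x_j - x'_j)^2) is exactly the product of the
    per-coordinate kernels exp(-gamma * (x_j - x'_j)^2).
    """
    d2 = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * d2)

def kernel_ridge_fit_predict(X, y, X_new, lam=1e-6, gamma=1.0):
    """Fit kernel ridge regression on (X, y) and predict at X_new."""
    K = coordinate_product_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)  # (K + lam I) a = y
    return coordinate_product_kernel(X_new, X, gamma) @ alpha
```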

Support vector machine classifiers by non-Euclidean margins
Ying Lin and Qi Ye
2020, 3(4): 279-300. doi: 10.3934/mfc.2020018

In this article, the classical support vector machine (SVM) classifiers are generalized by non-Euclidean margins. We first extend the linear models of the SVM classifiers with non-Euclidean margins, including the theorems and algorithms of the SVM classifiers with hard margins and soft margins. In particular, the SVM classifiers with $\infty$-norm margins can be solved by 1-norm optimization with sparsity. Next, we show that the nonlinear models of the SVM classifiers with $q$-norm margins can be equivalently transformed into SVMs in $p$-norm reproducing kernel Banach spaces given by the hinge loss, where $1/p + 1/q = 1$. Finally, we present numerical examples on artificial and real data to compare the different algorithms of the SVM classifiers with the $\infty$-norm margin.
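The abstract notes that the $\infty$-norm-margin SVM reduces to a sparsity-inducing 1-norm optimization. As a minimal illustration of that flavor of problem (not the paper's actual algorithms), here is subgradient descent on mean hinge loss plus an $\ell_1$ penalty on the weights; the toy data, step sizes, and penalty weight are all illustrative.

```python
import numpy as np

def l1_svm_subgradient(X, y, lam=0.01, lr=0.1, iters=500):
    """Minimize mean hinge loss + lam*||w||_1 by subgradient descent.

    The l1 penalty plays the role of the 1-norm objective arising from
    the infinity-norm margin; it drives unneeded weights toward zero.
    """
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, iters + 1):
        margins = y * (X @ w)
        active = margins < 1                      # points violating the margin
        # Subgradient: hinge part over active points, plus lam*sign(w).
        g = -(y[active, None] * X[active]).sum(axis=0) / n + lam * np.sign(w)
        w -= (lr / np.sqrt(t)) * g                # diminishing step size
    return w

X = np.array([[2.0, 0.0], [3.0, 1.0], [-2.0, 0.0], [-3.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w = l1_svm_subgradient(X, y)
preds = np.sign(X @ w)  # matches y on this separable toy set
```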

AIMS: Average information matrix splitting
Shengxin Zhu, Tongxiang Gu and Xingping Liu
2020, 3(4): 301-308. doi: 10.3934/mfc.2020012

For linear mixed models whose covariance matrices are not linearly dependent on the variance component parameters, we prove that the average of the observed information and the Fisher information can be split into two parts. The essential part enjoys a simple and computationally friendly formula, while the other part, which involves substantial computation, is a random zero matrix and is thus negligible.
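One way to sketch the splitting described above, assuming the average information is formed as in the average-information (AI-REML) literature; the symbols $\mathcal{I}_O$, $\mathcal{I}_E$, $\mathcal{I}_S$, $\mathcal{I}_R$ are illustrative notation, not the paper's:

```latex
% Average of observed and expected (Fisher) information matrices
\mathcal{I}_A(\theta)
  \;=\; \tfrac{1}{2}\bigl(\mathcal{I}_O(\theta) + \mathcal{I}_E(\theta)\bigr)
  \;=\; \underbrace{\mathcal{I}_S(\theta)}_{\text{essential, cheap to compute}}
  \;+\; \underbrace{\mathcal{I}_R(\theta)}_{\text{random remainder, negligible}}
```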

2021 CiteScore: 0.2


