eISSN: 2577-8838

Mathematical Foundations of Computing

August 2020, Volume 3, Issue 3

A fast matching algorithm for images with large scale disparity
Shichu Chen, Zhiqiang Wang and Yan Ren
2020, 3(3): 141-155, doi: 10.3934/mfc.2020021
Abstract:

With the expansion of unmanned aerial vehicle (UAV) application areas, there is a rising demand to realize UAV navigation by means of computer vision. Speeded-Up Robust Features (SURF) is an image matching algorithm well suited to solving the UAV localization problem. However, when there is a large scale difference between two images of the same scene, taken by a UAV and a satellite respectively, SURF is difficult to apply directly for accurate image matching. In this paper, a fast image matching algorithm that can bridge this huge scale gap is proposed. The algorithm searches for an optimal scaling ratio based on the ground distance represented by each pixel. In addition, a validity index for assessing matching performance is given. The experimental results illustrate that the proposed algorithm performs better in both speed and accuracy. Moreover, it also obtains correct matching results on rotated images. The proposed algorithm could therefore be applied to UAV localization and navigation in the future.
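
As a rough illustration of the scale-ratio search, the sketch below resizes the UAV image over a grid of candidate ratios and keeps the ratio that yields the most good SURF matches. It assumes OpenCV's contrib SURF implementation; the file names, Hessian threshold, and the uniform ratio grid are placeholders (the paper derives the ratio from the ground distance represented by each pixel rather than scanning a fixed grid).

```python
# Sketch only: scale-ratio search for SURF matching across a large scale gap.
# Requires opencv-contrib-python (SURF lives in cv2.xfeatures2d).
import cv2
import numpy as np

uav = cv2.imread("uav.png", cv2.IMREAD_GRAYSCALE)        # hypothetical inputs
sat = cv2.imread("satellite.png", cv2.IMREAD_GRAYSCALE)

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
kp_sat, des_sat = surf.detectAndCompute(sat, None)
matcher = cv2.BFMatcher(cv2.NORM_L2)

best_ratio, best_score = None, -1
for ratio in np.linspace(0.1, 1.0, 10):                  # candidate scaling ratios
    scaled = cv2.resize(uav, None, fx=ratio, fy=ratio)
    kp_uav, des_uav = surf.detectAndCompute(scaled, None)
    if des_uav is None:
        continue
    pairs = matcher.knnMatch(des_uav, des_sat, k=2)
    # Lowe's ratio test keeps only distinctive correspondences.
    good = [p for p in pairs if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]
    if len(good) > best_score:
        best_ratio, best_score = ratio, len(good)

print("best scaling ratio", best_ratio, "with", best_score, "good matches")
```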

Summation of Gaussian shifts as Jacobi's third Theta function
Shengxin Zhu
2020, 3(3): 157-163, doi: 10.3934/mfc.2020015
Abstract:

A proper choice of parameters in the Jacobi modular identity (the Jacobi imaginary transformation) implies that the summation of Gaussian shifts on infinite periodic grids can be represented as Jacobi's third theta function. As such, the connection between the summation of Gaussian shifts and the solution of a Schrödinger equation is shown explicitly. A concise and controllable upper bound on the saturation error for approximating constant functions by sums of Gaussian shifts can then be obtained immediately in terms of the underlying shape parameter of the Gaussians. This sheds light on how to choose the shape parameter and provides further understanding of using Gaussians of increasing flatness.
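
For reference, the identity behind this representation can be sketched via Poisson summation; the grid spacing $h$ and shape parameter $\delta$ below are assumed notation, not necessarily the paper's symbols:

$$ \sum_{k \in \mathbb{Z}} e^{-(x - kh)^2/\delta^2} \;=\; \frac{\sqrt{\pi}\,\delta}{h} \sum_{k \in \mathbb{Z}} e^{-\pi^2 k^2 \delta^2/h^2}\, e^{2\pi i k x/h} \;=\; \frac{\sqrt{\pi}\,\delta}{h}\, \vartheta_3\!\left(\frac{\pi x}{h},\; e^{-\pi^2 \delta^2/h^2}\right), $$

where $\vartheta_3(z, q) = \sum_{k \in \mathbb{Z}} q^{k^2} e^{2ikz}$ is Jacobi's third theta function. For increasingly flat Gaussians ($\delta/h$ large) the nome $e^{-\pi^2 \delta^2/h^2}$ is exponentially small, so the sum deviates from the constant $\sqrt{\pi}\,\delta/h$ only by an exponentially small amount, which is the source of a controllable saturation-error bound.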

Modal additive models with data-driven structure identification
Tieliang Gong, Chen Xu and Hong Chen
2020, 3(3): 165-183, doi: 10.3934/mfc.2020016
Abstract:

Additive models, due to their high flexibility, have received a great deal of attention in high-dimensional regression analysis. Many efforts have been made to capture interactions between predictive variables within additive models. However, typical approaches are designed on the basis of conditional mean assumptions, which may fail to reveal the structure when the data are contaminated by heavy-tailed noise. In this paper, we propose a penalized modal regression method, Modal Additive Models (MAM), based on a conditional mode assumption, for simultaneous function estimation and structure identification. MAM approximates the nonparametric functions with feedforward neural networks and maximizes the modal risk subject to constraints on the function space and the group structure. The proposed approach can be implemented by the half-quadratic (HQ) optimization technique, and its asymptotic estimation and selection consistency are established. It turns out that MAM can achieve a satisfactory learning rate and identify the target group structure with high probability. The effectiveness of MAM is also supported by simulated examples.
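
The half-quadratic step can be illustrated on the simplest case, modal linear regression. This is a sketch of the HQ idea only, not the full MAM (no feedforward networks and no group-structure constraints), and the kernel bandwidth and synthetic data are placeholders.

```python
# Sketch only: half-quadratic (HQ) optimization for modal linear regression.
# Maximizing sum_i exp(-r_i^2 / (2 sigma^2)) alternates between updating
# auxiliary weights and solving a weighted least-squares problem.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.standard_t(df=2, size=200)  # heavy-tailed noise

sigma, w = 1.0, np.zeros(3)
for _ in range(50):
    r = y - X @ w
    # HQ step 1: auxiliary weights from the Gaussian mode kernel.
    a = np.exp(-r**2 / (2 * sigma**2))
    # HQ step 2: weighted least squares with the weights held fixed.
    W = np.diag(a)
    w = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

print("modal regression coefficients:", w)
```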

Averaging versus voting: A comparative study of strategies for distributed classification
Donglin Wang, Honglan Xu and Qiang Wu
2020, 3(3): 185-193, doi: 10.3934/mfc.2020017
Abstract:

In this paper we propose two strategies, averaging and voting, to implement distributed classification via the divide-and-conquer approach. When a data set is too big to be processed by one processor, or is naturally stored in different locations, the method partitions the whole data set into multiple subsets, randomly or according to their locations. A base classification algorithm is then applied to each subset to produce a local classification model. Finally, averaging or voting is used to couple the local models into the final classification model. We performed thorough empirical studies to compare the two strategies; the results show that averaging is more effective in most scenarios.
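
A minimal sketch of the two coupling strategies is given below, assuming scikit-learn with logistic regression as the base classifier; the data set, the number of partitions, and the base learner are placeholder choices rather than the paper's experimental setup.

```python
# Sketch only: distributed classification by averaging vs. voting.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Partition the training data randomly and fit one local model per subset.
parts = np.array_split(np.random.default_rng(0).permutation(len(ytr)), 10)
models = [LogisticRegression(max_iter=1000).fit(Xtr[idx], ytr[idx]) for idx in parts]

# Averaging: mean of the local class-probability estimates.
avg_pred = np.mean([m.predict_proba(Xte)[:, 1] for m in models], axis=0) > 0.5
# Voting: majority vote over the local hard labels.
vote_pred = np.mean([m.predict(Xte) for m in models], axis=0) > 0.5

print("averaging accuracy:", (avg_pred == yte).mean())
print("voting accuracy:   ", (vote_pred == yte).mean())
```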

The nonexistence of global solutions for a system of q-difference inequalities
Yaoyao Luo and Run Xu
2020, 3(3): 195-203, doi: 10.3934/mfc.2020019
Abstract:

In this paper, we obtain sufficient conditions for the nonexistence of global solutions for a system of $q$-difference inequalities. Our approach is based on the weak formulation of the problem, a particular choice of the test function, and some $q$-integral inequalities.
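
For context, such inequalities are built on the standard $q$-calculus operators (for $0 < q < 1$); the textbook definitions of the $q$-derivative and the Jackson $q$-integral are recalled below for orientation, though the paper's precise setup may differ:

$$ D_q f(t) = \frac{f(qt) - f(t)}{(q - 1)\,t} \quad (t \neq 0), \qquad \int_0^t f(s)\, d_q s = (1 - q)\, t \sum_{k=0}^{\infty} q^k f(q^k t). $$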

Sparse regularized learning in the reproducing kernel Banach spaces with the $\ell^1$ norm
Ying Lin, Rongrong Lin and Qi Ye
2020, 3(3): 205-218, doi: 10.3934/mfc.2020020
Abstract:

We present a sparse representer theorem for regularization networks in a reproducing kernel Banach space with the $\ell^1$ norm, using the theory of convex analysis. The theorem states that the extreme points of the solution set of a regularization network in such a sparsity-promoting space belong to the span of kernel functions centered on at most $n$ adaptive points of the input space, where $n$ is the number of training data. Under Lebesgue-constant assumptions on the reproducing kernels, we recover the relaxed representer theorem and the exact representer theorem for that space from the literature. Finally, we perform numerical experiments on synthetic data and real-world benchmark data in reproducing kernel Banach spaces with the $\ell^1$ norm and in reproducing kernel Hilbert spaces, both with Laplacian kernels. The numerical performance demonstrates the advantages of sparse regularized learning.
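
As a rough illustration, an $\ell^1$-regularized regularization network seeks a solution in the span of the kernel sections $K(\cdot, x_j)$ with a sparsity-promoting penalty on the coefficients. The sketch below fits such a model with a Laplacian kernel via scikit-learn's Lasso with a square loss; the kernel width, regularization parameter, and synthetic data are placeholder assumptions rather than the paper's setup.

```python
# Sketch only: l1-regularized kernel regularization network.
# Treating the columns of the kernel matrix as features, the coefficient
# vector is fitted with a Lasso-type objective, which promotes sparsity.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.metrics.pairwise import laplacian_kernel

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 1))
y = np.sin(np.pi * X[:, 0]) + 0.1 * rng.normal(size=100)

K = laplacian_kernel(X, X, gamma=2.0)            # kernel matrix as features
model = Lasso(alpha=1e-3, max_iter=50000).fit(K, y)
print("nonzero coefficients:", np.count_nonzero(model.coef_), "of", len(y))
```

Sparsity here means the learned function is supported on far fewer than $n$ kernel centers, in the spirit of the representer theorem above.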
