# American Institute of Mathematical Sciences

August  2020, 19(8): 4191-4212. doi: 10.3934/cpaa.2020188

## Stochastic AUC optimization with general loss

Zhenhuan Yang, Wei Shen, Yiming Ying and Xiaoming Yuan

1. Department of Mathematics and Statistics, State University of New York at Albany, Albany, NY 12206, USA
2. Department of Mathematics, Hong Kong Baptist University, Kowloon Tong, Kowloon, Hong Kong, China
3. Department of Mathematics, The University of Hong Kong, Hong Kong, China

* Corresponding author

Received: October 2019. Revised: January 2020. Published: May 2020.

Fund Project: This work was completed while Wei Shen was a visiting student at SUNY Albany. Yiming Ying is supported by the National Science Foundation (NSF grant IIS1816227).

Recently, there has been considerable work on developing efficient stochastic optimization algorithms for AUC maximization. However, most of it focuses on the least squares loss, which may not be the best option in practice. The main difficulty in handling a general convex loss is its pairwise nonlinearity with respect to the sampling distribution generating the data. In this paper, we use Bernstein polynomials to uniformly approximate general losses, which decouples this pairwise nonlinearity. In particular, we show that this reduction of AUC maximization with a general loss is equivalent to a weakly convex (nonconvex) min-max formulation. We then develop a novel SGD algorithm for AUC maximization whose per-iteration cost is linear in the data dimension, making it amenable to streaming data analysis. Despite the non-convexity, we prove global convergence by exploiting the appealing convexity-preserving property of Bernstein polynomials and the intrinsic structure of the min-max formulation. Experiments validate the effectiveness of the proposed approach.
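The Bernstein construction behind this reduction can be illustrated with a short sketch (our own illustrative code, not the authors' implementation): the degree-$n$ Bernstein polynomial of a loss $f$ on $[0,1]$ is $B_n(f;x) = \sum_{k=0}^n f(k/n)\binom{n}{k}x^k(1-x)^{n-k}$; it converges uniformly to $f$ and, being a positive combination of values of $f$, preserves convexity. The hinge-type loss below is a hypothetical stand-in for a general loss.

```python
from math import comb

def bernstein(f, n):
    """Degree-n Bernstein polynomial of f on [0, 1]:
    B_n(f; x) = sum_k f(k/n) * C(n, k) * x^k * (1 - x)^(n - k)."""
    def B(x):
        return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
                   for k in range(n + 1))
    return B

# Illustrative convex, non-smooth loss on [0, 1] (a shifted hinge).
f = lambda t: max(0.0, 0.5 - t)
B20 = bernstein(f, 20)

# Uniform error on a grid; it shrinks as the degree n grows, while the
# polynomial stays convex because all coefficients f(k/n) enter positively.
err = max(abs(f(i / 200) - B20(i / 200)) for i in range(201))
```

The polynomial interpolates the endpoints exactly ($B_n(f;0)=f(0)$), and the worst-case error sits near the kink of the hinge, decaying at the usual $O(n^{-1/2})$ rate for Lipschitz losses.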

Citation: Zhenhuan Yang, Wei Shen, Yiming Ying, Xiaoming Yuan. Stochastic AUC optimization with general loss. Communications on Pure & Applied Analysis, 2020, 19 (8) : 4191-4212. doi: 10.3934/cpaa.2020188

Figure: Comparison of convergence speed between SAUC-H and $\text{OAM}_{gra}$
Figure: Evaluation of AUC scores versus the degree of the Bernstein polynomial
Algorithm 1: Stochastic AUC Optimization (SAUC)

1: Input: $R>0$, $\gamma\geq\gamma_0$ and $\beta>0$.
2: Initialize $\bar{{\mathbf{v}}}_0 = 0$ and $\bar{{\mathit{\boldsymbol{\alpha}}}}_0 = 0$.
3: for $t=1$ to $T-1$ do
4:   Set ${\mathbf{v}}_0^t = \bar{{\mathbf{v}}}_{t-1}$, ${\mathit{\boldsymbol{\alpha}}}_0^t = \bar{{\mathit{\boldsymbol{\alpha}}}}_{t-1}$ and $\eta_t = \frac{\beta}{\sqrt{t}}$.
5:   for $j=1$ to $t$ do
6:     Randomly sample $z_j^t = (x_j^t,y_j^t)$ and compute
\begin{align*} {\mathbf{v}}_{j}^t &= {{\bf Proj}}_{{\Omega}_1} \bigl({\mathbf{v}}_{j-1}^t - \eta_t \nabla_{{\mathbf{v}}} \varPhi_{\gamma}^t({\mathbf{v}}_{j-1}^t,{\mathit{\boldsymbol{\alpha}}}_{j-1}^t;z_j^t)\bigr), \\ {\mathit{\boldsymbol{\alpha}}}_{j}^t &= {{\bf Proj}}_{{\Omega}_2} \bigl({\mathit{\boldsymbol{\alpha}}}_{j-1}^t + \eta_t \nabla_{{\mathit{\boldsymbol{\alpha}}}} \varPhi_{\gamma}^t({\mathbf{v}}_{j-1}^t,{\mathit{\boldsymbol{\alpha}}}_{j-1}^t;z_j^t)\bigr) \end{align*}
7:   end for
8:   Compute $\bar{{\mathbf{v}}}_{t} = \frac{1}{t}\sum_{j=0}^{t-1} {\mathbf{v}}_j^t$ and $\bar{{\mathit{\boldsymbol{\alpha}}}}_{t} = \frac{1}{t}\sum_{j=0}^{t-1} {\mathit{\boldsymbol{\alpha}}}_j^t$.
9: end for
10: Output: $\widetilde{{\mathbf{v}}}_T:=\frac{1}{T}\sum_{t=0}^{T-1}\bar{{\mathbf{v}}}_{t}$ and $\widetilde{{\mathit{\boldsymbol{\alpha}}}}_T:=\frac{1}{T}\sum_{t=0}^{T-1}\bar{{\mathit{\boldsymbol{\alpha}}}}_{t}$.
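The loop structure of Algorithm 1 can be sketched in plain Python. This is a simplified illustration, not the authors' code: `grad_v`, `grad_a` and `sample` are hypothetical oracles standing in for $\nabla\varPhi_\gamma^t$ and the data stream, and both feasible sets are taken to be Euclidean balls of radius $R$. Stage $t$ runs $t$ projected descent steps in $\mathbf{v}$ and ascent steps in $\boldsymbol{\alpha}$ with step size $\eta_t=\beta/\sqrt{t}$, averages the stage iterates, and the output averages the stage averages.

```python
import math
import random

def proj(w, R):
    """Euclidean projection onto the ball of radius R."""
    nrm = math.sqrt(sum(x * x for x in w))
    return list(w) if nrm <= R else [R * x / nrm for x in w]

def sauc_sketch(grad_v, grad_a, sample, d, p, R=1.0, beta=0.5, T=40):
    """Skeleton of Algorithm 1 (simplified): warm-started inner loops,
    within-stage averaging of v_0..v_{t-1}, across-stage averaging at the end."""
    v_bar, a_bar = [0.0] * d, [0.0] * p
    v_sum, a_sum = list(v_bar), list(a_bar)      # accumulate stage averages
    for t in range(1, T):
        v, a = list(v_bar), list(a_bar)          # warm start from previous stage
        eta = beta / math.sqrt(t)
        sv, sa = [0.0] * d, [0.0] * p            # sums of pre-update iterates
        for _ in range(t):
            z = sample()
            sv = [s + x for s, x in zip(sv, v)]
            sa = [s + x for s, x in zip(sa, a)]
            gv, ga = grad_v(v, a, z), grad_a(v, a, z)
            v = proj([vi - eta * g for vi, g in zip(v, gv)], R)  # descent in v
            a = proj([ai + eta * g for ai, g in zip(a, ga)], R)  # ascent in alpha
        v_bar, a_bar = [s / t for s in sv], [s / t for s in sa]
        v_sum = [s + x for s, x in zip(v_sum, v_bar)]
        a_sum = [s + x for s, x in zip(a_sum, a_bar)]
    return [x / T for x in v_sum], [x / T for x in a_sum]

# Toy stochastic saddle Phi(v, a; z) = (v - z) a - a^2 / 2 with E[z] = 1,
# a hypothetical stand-in for the objective Phi_gamma^t of the paper.
rng = random.Random(0)
v_t, a_t = sauc_sketch(
    grad_v=lambda v, a, z: [a[0]],              # d/dv Phi = a
    grad_a=lambda v, a, z: [v[0] - z - a[0]],   # d/da Phi = (v - z) - a
    sample=lambda: rng.gauss(1.0, 0.1),
    d=1, p=1)
```

Because every iterate is projected into the radius-$R$ ball and the outputs are convex averages of those iterates, the returned points remain feasible by construction; the paper's analysis supplies the actual convergence guarantee.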
Table: Statistics of datasets
Table: Comparison of AUC scores (mean$\pm$std) on test data; OPAUC does not converge within a reasonable time limit on news20 and sector. The best AUC value on each dataset is shown in bold and the second best is underlined.
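For reference, the AUC score being compared is the probability that a randomly drawn positive example is scored above a randomly drawn negative one. A minimal sketch of the direct pairwise form (ties counted as one half):

```python
def auc(pos_scores, neg_scores):
    """AUC = fraction of (positive, negative) pairs ranked correctly;
    ties contribute 1/2. Direct pairwise form, O(n+ * n-)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

score = auc([0.8, 0.4], [0.6, 0.2])  # 3 of 4 pairs ordered correctly -> 0.75
```

This pairwise average over positive-negative pairs is exactly the quantity whose surrogate losses the paper's min-max reduction optimizes.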


