# American Institute of Mathematical Sciences

January 2019, 4: 8. doi: 10.1186/s41546-019-0042-6

## Nonlinear regression without i.i.d. assumption

Qing Xu and Xiaohua (Michael) Xuan (UniDT, Shanghai, China)

Published: November 2019

In this paper, we consider a class of nonlinear regression problems without the assumption that the observations are independent and identically distributed (i.i.d.). We formulate a corresponding minimax problem for nonlinear regression and give a numerical algorithm for solving it. The algorithm can be applied to regression and machine learning problems, and yields better results than traditional least squares and standard machine learning methods on such data.
Citation: Qing Xu, Xiaohua (Michael) Xuan. Nonlinear regression without i.i.d. assumption. Probability, Uncertainty and Quantitative Risk, 2019, 4: 8. doi: 10.1186/s41546-019-0042-6
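To illustrate the minimax idea described in the abstract, the sketch below fits a model by minimizing the *worst-case* group loss over several groups of non-identically distributed data, rather than the pooled least-squares loss. This is my own minimal illustration under assumed details (a linear model, two groups with different noise levels, and a simple subgradient scheme), not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two groups of data that are NOT identically distributed:
# same underlying trend y = 2x + 1, but very different noise levels.
x1 = rng.uniform(0, 1, 50); y1 = 2 * x1 + 1 + rng.normal(0, 0.05, 50)
x2 = rng.uniform(0, 1, 50); y2 = 2 * x2 + 1 + rng.normal(0, 0.50, 50)
groups = [(x1, y1), (x2, y2)]

def group_losses(theta):
    """Mean squared error of the model (a, b) -> a*x + b, per group."""
    a, b = theta
    return np.array([np.mean((y - (a * x + b)) ** 2) for x, y in groups])

def minimax_fit(theta0, lr=0.1, steps=2000):
    """Subgradient descent on theta -> max_j MSE_j:
    at each step, take a gradient step on the currently worst group."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(steps):
        j = int(np.argmax(group_losses(theta)))  # pick the worst-case group
        x, y = groups[j]
        a, b = theta
        r = (a * x + b) - y                      # residuals on that group
        grad = np.array([2 * np.mean(r * x), 2 * np.mean(r)])
        theta -= lr * grad
    return theta

theta = minimax_fit([0.0, 0.0])
```

Unlike pooled least squares, which weights every observation equally, this objective guards against the group on which the current fit is worst, which is the sense in which a minimax formulation is robust to non-i.i.d. sampling.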
