
Journal of Industrial & Management Optimization

2005, Volume 1, Issue 2



Biography of Professor Jiye Han
Xiaodong Hu
2005, 1(2): 149-152 doi: 10.3934/jimo.2005.1.149
Professor Jiye Han was born in Tianjin, China, in 1935. He is currently serving as a research fellow and PhD supervisor of the Institute of Applied Mathematics, Chinese Academy of Sciences (CAS). He is also a guest professor of Applied Mathematics of Tsinghua University and Dalian University of Technology. Professor Han is the associate editor-in-chief of Acta Mathematicae Applicatae Sinica (Chinese edition) and an executive editor of Acta Mathematicae Applicatae Sinica (English edition), Acta Mathematicae Sinica (Chinese edition), Operations Research Transactions, and Chinese Journal of Operations Research and Management Science. In addition, Professor Han is the President of Mathematical Programming Sub-society of the Operations Research Society of China.

A smoothing Newton algorithm for mathematical programs with complementarity constraints
Zheng-Hai Huang and Jie Sun
2005, 1(2): 153-170 doi: 10.3934/jimo.2005.1.153
We propose a smoothing Newton algorithm for solving mathematical programs with complementarity constraints (MPCCs). Under some reasonable conditions, the proposed algorithm is shown to be globally convergent and to generate a $B$-stationary point of the MPCC. Preliminary numerical results on some MacMPEC problems are reported.
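The smoothing idea behind such methods can be illustrated on a one-dimensional complementarity problem. The sketch below is not the authors' algorithm: it applies Newton steps to the Chen-Harker-Kanzow-Smale (CHKS) smoothing of the min-function while driving the smoothing parameter to zero, with the map F, its derivative, and the parameter schedule chosen purely for illustration.

```python
import math

def chks(a, b, mu):
    # CHKS smoothing of 2*min(a, b); exact as mu -> 0
    return a + b - math.sqrt((a - b) ** 2 + 4 * mu ** 2)

def solve_ncp(F, dF, x0, mu=1.0, tol=1e-10):
    """Solve x >= 0, F(x) >= 0, x*F(x) = 0 by Newton steps on
    chks(x, F(x), mu) = 0 while driving the smoothing parameter mu -> 0."""
    x = x0
    for _ in range(200):
        fx = F(x)
        f = chks(x, fx, mu)
        if abs(f) < tol and mu < tol:
            break
        r = math.sqrt((x - fx) ** 2 + 4 * mu ** 2)
        # partial derivatives of chks in its first two arguments,
        # combined by the chain rule to differentiate chks(x, F(x), mu)
        da = 1.0 - (x - fx) / r
        db = 1.0 + (x - fx) / r
        x -= f / (da + db * dF(x))
        mu *= 0.5
    return x

# complementarity problem with F(x) = x - 2: the solution is x = 2
x = solve_ncp(lambda t: t - 2.0, lambda t: 1.0, x0=1.0)
```

The key point, as in the paper's setting, is that the smoothed equation is differentiable for every mu > 0, so standard Newton machinery applies.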
A trust region method for nonsmooth convex optimization
Nobuko Sagara and Masao Fukushima
2005, 1(2): 171-180 doi: 10.3934/jimo.2005.1.171
We propose an iterative method that solves a nonsmooth convex optimization problem by converting the original objective function to a once continuously differentiable function by way of Moreau-Yosida regularization. The proposed method makes use of approximate function and gradient values of the Moreau-Yosida regularization instead of the corresponding exact values. Under this setting, Fukushima and Qi (1996) and Rauf and Fukushima (2000) proposed a proximal Newton method and a proximal BFGS method, respectively, for nonsmooth convex optimization. While these methods employ a line search strategy to achieve global convergence, the method proposed in this paper uses a trust region strategy. We establish global and superlinear convergence of the method under appropriate assumptions.
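The Moreau-Yosida regularization underlying this approach can be computed in closed form for simple functions. As a minimal illustration (not from the paper): for f(x) = |x| the proximal map is soft-thresholding and the envelope is the Huber function, which is continuously differentiable even though |x| is not.

```python
def prox_abs(x, lam):
    # proximal map of f(y) = |y| with parameter lam: soft-thresholding
    return max(abs(x) - lam, 0.0) * (1.0 if x > 0 else -1.0)

def moreau_env(x, lam):
    # e_lam(x) = min_y |y| + (x - y)^2 / (2*lam); the minimizer is prox_abs(x, lam)
    p = prox_abs(x, lam)
    return abs(p) + (x - p) ** 2 / (2.0 * lam)

def moreau_grad(x, lam):
    # gradient of the C^1 envelope: (x - prox_abs(x, lam)) / lam
    return (x - prox_abs(x, lam)) / lam

# inside [-lam, lam] the envelope is the quadratic x^2/(2*lam);
# outside it is |x| - lam/2, so the kink of |x| at 0 has been smoothed
```

Methods like the one in the paper minimize this smooth envelope, whose minimizers coincide with those of the original nonsmooth function.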
Analysis of monotone gradient methods
Yuhong Dai and Ya-xiang Yuan
2005, 1(2): 181-192 doi: 10.3934/jimo.2005.1.181
The gradient method is one of the simplest methods in nonlinear optimization. In this paper, we give a brief review of monotone gradient methods and study their numerical properties by introducing a new technique of long-term observation. We find that one monotone gradient algorithm recently proposed by Yuan shares with the Barzilai-Borwein (BB) method the property that the gradient components with respect to the eigenvectors of the function's Hessian decrease together. This might partly explain why this algorithm of Yuan is comparable to the BB method in practice. We also provide examples showing that the alternate minimization algorithm and the other algorithm by Yuan may fall into cycles. Some more efficient gradient algorithms are provided; in particular, one of them is monotone and performs better than the BB method in the quadratic case.
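A minimal sketch of the Barzilai-Borwein method referred to above, using the long BB step alpha_k = s^T s / s^T y; the initial step length and the test quadratic are assumptions for this demo, not from the paper.

```python
def bb_gradient(grad, x0, iters=100):
    """Barzilai-Borwein gradient method (long-step variant):
    alpha_k = s^T s / s^T y with s = x_k - x_{k-1}, y = g_k - g_{k-1}."""
    x = list(x0)
    g = grad(x)
    alpha = 1e-3  # initial step length (an assumption for this demo)
    for _ in range(iters):
        x_new = [xi - alpha * gi for xi, gi in zip(x, g)]
        g_new = grad(x_new)
        s = [a - b for a, b in zip(x_new, x)]
        y = [a - b for a, b in zip(g_new, g)]
        sy = sum(a * b for a, b in zip(s, y))
        if sy > 0:  # keep the step well defined (s^T y >= 0 in the convex case)
            alpha = sum(a * a for a in s) / sy
        x, g = x_new, g_new
    return x

# quadratic f(x) = 0.5*(x1^2 + 10*x2^2), gradient (x1, 10*x2), minimizer (0, 0)
x = bb_gradient(lambda v: [v[0], 10.0 * v[1]], [1.0, 1.0])
```

Note that the BB method is nonmonotone: the function value may increase at some iterations, which is exactly the behavior the paper contrasts with monotone gradient methods.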
Convergence property of the Fletcher-Reeves conjugate gradient method with errors
C.Y. Wang and M.X. Li
2005, 1(2): 193-200 doi: 10.3934/jimo.2005.1.193
In this paper, we consider a new kind of Fletcher-Reeves (FR) conjugate gradient method with errors, which is broadly applied in neural network training. Its iterate formula is $x_{k+1}=x_{k}+\alpha_{k}(s_{k}+\omega_{k})$, where the main direction $s_{k}$ is obtained by the FR conjugate gradient method and $\omega_{k}$ is an accumulated error. The global convergence of the method is proved under mild assumptions.
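The iterate formula above can be sketched as follows on a convex quadratic; the exact minimizing step used for alpha_k and the zero-error baseline passed as the noise term are assumptions for this demo, not the paper's conditions.

```python
def fr_cg_errors(A, x0, noise, iters=20):
    """Fletcher-Reeves CG on f(x) = 0.5 x^T A x where each step is perturbed:
    x_{k+1} = x_k + alpha_k (s_k + w_k), with s_k = -g_k + beta_k s_{k-1}
    and beta_k = ||g_k||^2 / ||g_{k-1}||^2 (the FR formula)."""
    matvec = lambda v: [sum(a * vi for a, vi in zip(row, v)) for row in A]
    x = list(x0)
    g = matvec(x)
    s = [-gi for gi in g]
    for k in range(iters):
        d = [si + wi for si, wi in zip(s, noise(k))]  # perturbed direction
        Ad = matvec(d)
        dAd = sum(a * b for a, b in zip(d, Ad))
        if dAd <= 1e-16:
            break
        # exact minimizing step along d for a quadratic (a demo assumption)
        alpha = -sum(a * b for a, b in zip(g, d)) / dAd
        x = [xi + alpha * di for xi, di in zip(x, d)]
        g_new = matvec(x)
        beta = sum(a * a for a in g_new) / max(sum(a * a for a in g), 1e-16)
        s = [-gi + beta * si for gi, si in zip(g_new, s)]
        g = g_new
    return x

A = [[2.0, 0.0], [0.0, 10.0]]
x = fr_cg_errors(A, [1.0, 1.0], noise=lambda k: (0.0, 0.0))  # error-free baseline
```

With the error term set to zero this reduces to plain FR conjugate gradient, which terminates on an n-dimensional quadratic in at most n steps; the paper's analysis concerns what survives when $\omega_{k}$ is nonzero.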
An iterative method for general variational inequalities
Hongxia Yin
2005, 1(2): 201-209 doi: 10.3934/jimo.2005.1.201
Motivated by the observation that some reformulation-based extragradient methods for general monotone variational inequalities in real Hilbert space may not generate a solution of the original problem, we propose an iterative method with line searches and prove its convergence for general pseudomonotone (monotone) variational inequality problems.
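The classical (Korpelevich) extragradient iteration that motivates such methods can be sketched as follows; the map F, the feasible set C = [0, 1], and the fixed step tau are illustrative assumptions, not the paper's line-search scheme.

```python
def extragradient(F, proj, x0, tau=0.3, iters=200):
    """Korpelevich's extragradient method for the VI: find x* in C with
    F(x*) . (x - x*) >= 0 for all x in C.
    Each iteration uses two projections:
      y_k     = P_C(x_k - tau F(x_k))   (predictor)
      x_{k+1} = P_C(x_k - tau F(y_k))   (corrector)"""
    x = x0
    for _ in range(iters):
        y = proj(x - tau * F(x))
        x = proj(x - tau * F(y))
    return x

# VI on C = [0, 1] with F(x) = x - 2; the solution is the boundary point x* = 1
proj = lambda t: min(max(t, 0.0), 1.0)
x = extragradient(lambda t: t - 2.0, proj, x0=0.0)
```

The extra "predictor" evaluation is what lets the method handle merely monotone (not strongly monotone) operators, at the cost of two projections per iteration.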
The revisit of a projection algorithm with variable steps for variational inequalities
Qingzhi Yang
2005, 1(2): 211-217 doi: 10.3934/jimo.2005.1.211
Projection-type methods are an important class of methods for solving variational inequalities (VI). This paper presents a new treatment of a classical projection algorithm with variable steps, which was first proposed by Auslender in the 1970s and later developed by Fukushima in the 1980s. The main purpose of this work is to weaken the assumptions under which the convergence of the original method remains valid.
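A generic projected iteration with variable steps, in the spirit of the Auslender/Fukushima scheme discussed above; the step rule and the test problem are assumptions chosen for illustration, not the conditions analyzed in the paper.

```python
def projection_method(F, proj, x0, steps, iters=500):
    """Projection algorithm with variable steps rho_k for the VI over C:
    x_{k+1} = P_C(x_k - rho_k F(x_k)).
    Unlike the extragradient method, it uses one projection per iteration,
    which is why stronger assumptions on F are typically needed."""
    x = x0
    for k in range(iters):
        x = proj(x - steps(k) * F(x))
    return x

# strongly monotone F(x) = 2(x - 3) on C = [0, 2]; the solution is x* = 2
x = projection_method(lambda t: 2.0 * (t - 3.0),
                      lambda t: min(max(t, 0.0), 2.0),
                      x0=0.0, steps=lambda k: 0.4)
```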
A discretization based smoothing method for solving semi-infinite variational inequalities
Burcu Özçam and Hao Cheng
2005, 1(2): 219-233 doi: 10.3934/jimo.2005.1.219
We propose a new smoothing technique based on discretization to solve semi-infinite variational inequalities. The proposed algorithm is tested on both linear and nonlinear problems and shown to be efficient.
A smoothing projected Newton-type method for semismooth equations with bound constraints
Xiaojiao Tong and Shuzi Zhou
2005, 1(2): 235-250 doi: 10.3934/jimo.2005.1.235
This paper develops a smoothing algorithm for solving a system of constrained equations. Compared with traditional methods, the new method does not require the continuous differentiability assumption for the corresponding merit function. By using a perturbation technique and a suitable strategy for choosing the search direction, we show that the new method not only keeps the smoothing variable strictly positive at any non-stationary point of the corresponding optimization problem, but also enjoys global convergence and local superlinear convergence. The former property is the key requirement for smoothing methods. Some numerical examples arising from semi-infinite programming (SIP) show that the new algorithm is promising.
A network simplex algorithm for simple manufacturing network model
Jiangtao Mo, Liqun Qi and Zengxin Wei
2005, 1(2): 251-273 doi: 10.3934/jimo.2005.1.251
In this paper, we propose a network model called the simple manufacturing network. Our model combines the ordinary multicommodity network with the manufacturing network flow model and can be used to characterize complicated manufacturing scenarios. By formulating the model as a minimum cost flow problem with several additional bounded variables, we present a modified network simplex method that exploits the special structure of the model and performs the computation directly on the network. A numerical example illustrates our method.
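The underlying minimum cost flow problem can be solved, for small instances, by the textbook successive-shortest-path scheme; the sketch below is that generic method, not the authors' modified network simplex algorithm, and the example network is invented for illustration.

```python
def min_cost_flow(n, edges, s, t, flow):
    """Minimum-cost s-t flow by successive shortest paths (Bellman-Ford,
    which tolerates the negative-cost residual arcs).
    edges: list of (u, v, capacity, cost). Returns the total cost of
    sending `flow` units, or None if that much flow is infeasible."""
    # residual graph: each entry is [to, residual_cap, cost, reverse_index]
    graph = [[] for _ in range(n)]
    for u, v, cap, cost in edges:
        graph[u].append([v, cap, cost, len(graph[v])])
        graph[v].append([u, 0, -cost, len(graph[u]) - 1])
    total = 0
    while flow > 0:
        dist = [float("inf")] * n
        dist[s] = 0
        prev = [None] * n  # (node, edge index) used to reach each node
        for _ in range(n - 1):
            updated = False
            for u in range(n):
                if dist[u] == float("inf"):
                    continue
                for i, (v, cap, cost, _r) in enumerate(graph[u]):
                    if cap > 0 and dist[u] + cost < dist[v]:
                        dist[v] = dist[u] + cost
                        prev[v] = (u, i)
                        updated = True
            if not updated:
                break
        if dist[t] == float("inf"):
            return None  # cannot route the remaining flow
        # bottleneck capacity along the shortest augmenting path
        push, v = flow, t
        while v != s:
            u, i = prev[v]
            push = min(push, graph[u][i][1])
            v = u
        # augment along the path, updating forward and reverse arcs
        v = t
        while v != s:
            u, i = prev[v]
            graph[u][i][1] -= push
            graph[v][graph[u][i][3]][1] += push
            v = u
        total += push * dist[t]
        flow -= push
    return total

# 4-node example: three s-t routes of costs 3, 3, and 4
edges = [(0, 1, 2, 1), (0, 2, 1, 2), (1, 3, 1, 3), (1, 2, 1, 1), (2, 3, 2, 1)]
cost = min_cost_flow(4, edges, s=0, t=3, flow=3)
```

A network simplex method such as the paper's solves the same problem by pivoting among spanning-tree bases, which scales far better than this augmenting-path sketch.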

2016  Impact Factor: 0.994



