
ISSN:
1547-5816
eISSN:
1553-166X
Journal of Industrial & Management Optimization
April 2009 , Volume 5 , Issue 2
Special issue dedicated to the XIII Latin-Ibero-American Conference on Operations Research (CLAIO 2006)
2009, 5(2): i-ii
doi: 10.3934/jimo.2009.5.2i
Abstract:
This special issue is devoted to showcasing selected papers from the XIII CLAIO (Conferencia Latino-Ibero-Americana de Investigación Operativa, the Latin-Ibero-American Conference on Operations Research). This event is the flagship conference of ALIO, the Latin-Ibero-American Operations Research Association. The CLAIO, held biennially, is the main meeting point of the Latin-American Operations Research academic community, and it also fosters and features the participation of researchers from all over the world.
The papers presented in this special issue reflect part of the large diversity of subjects treated at CLAIO, ranging from theoretical to applied viewpoints. Over 350 extended abstracts were accepted to the conference, providing novel methodological contributions, models, and algorithms, as well as case studies that illustrate the application of optimization in real-life settings. We are pleased to introduce here a representative set of papers, arising from our selection and the refereeing process.
2009, 5(2): 175-191
doi: 10.3934/jimo.2009.5.175
Abstract:
The aim of this paper is to extend the applicability of an algorithm for solving inconsistent linear systems to the rank-deficient case by employing incomplete projections onto the set of solutions of the augmented system $Ax - r = b$. The extended algorithm converges to the minimum-norm least-squares solution. For that purpose, incomplete oblique projections are used, defined by means of matrices that penalize the norm of the residuals. The theoretical properties of the new algorithm are analyzed, and numerical experiments comparing its performance with some well-known projection methods are presented.
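As a concrete illustration of the augmented-system idea, the sketch below runs plain Kaczmarz-type row projections on $[A\ {-I}](x; r) = b$; the data and number of sweeps are assumptions, and, unlike the paper's penalized oblique projections, this plain variant converges to the Tikhonov-regularized solution rather than the minimum-norm least-squares solution.

```python
# Sketch (assumed data and parameters): cyclic Kaczmarz projections on
# the augmented system [A  -I](x; r) = b, which is consistent even when
# A x = b is not.  Started from zero, the iterates converge to the
# minimum-norm solution of the augmented system, i.e. the Tikhonov-
# regularized least squares x = (A^T A + I)^{-1} A^T b; the paper's
# penalized oblique projections recover the least-squares solution instead.
import numpy as np

def augmented_kaczmarz(A, b, sweeps=500):
    m, n = A.shape
    M = np.hstack([A, -np.eye(m)])          # augmented matrix [A  -I]
    z = np.zeros(n + m)                     # z = (x, r)
    for _ in range(sweeps):
        for i in range(m):
            row = M[i]
            # orthogonal projection onto the hyperplane row . z = b_i
            z += (b[i] - row @ z) / (row @ row) * row
    return z[:n], z[n:]

A = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 3.0, 2.0])               # inconsistent: rows 1-2 clash
x, r = augmented_kaczmarz(A, b)
print(x, np.linalg.solve(A.T @ A + np.eye(2), A.T @ b))  # should agree
```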
2009, 5(2): 193-216
doi: 10.3934/jimo.2009.5.193
Abstract:
This paper presents a study of neighborhood search algorithms for unconstrained multiobjective permutation problems. A filter-and-fan/path-relinking approach designed by us and the stochastic local search (SLS) developed by Paquete and Stützle [22], implemented by us, are compared using the bi-objective quadratic assignment problem and the bi-objective travelling salesman problem as case studies. Our approach is also compared with published results for the bi-objective quadratic assignment problem, the bi-objective flow shop problem, and the bi-objective and tri-objective travelling salesman problems. The results show that the filter-and-fan/path-relinking approach is promising for tackling multiobjective permutation problems, achieving good, widely distributed approximations to the Pareto-optimal front.
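Any archive-based multiobjective search of this kind relies on a Pareto filtering step. The sketch below shows a minimal such filter for minimization problems; it illustrates the concept only and is not the authors' filter-and-fan code.

```python
# Minimal Pareto filter (minimization): keep only the objective vectors
# not dominated by any other point found so far.
def nondominated(points):
    """Return the points not weakly dominated by any distinct point."""
    front = []
    for p in points:
        if not any(all(q[i] <= p[i] for i in range(len(p))) and q != p
                   for q in points):
            front.append(p)
    return front

print(nondominated([(3, 5), (4, 4), (5, 3), (4, 5), (6, 6)]))
# -> [(3, 5), (4, 4), (5, 3)]
```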
2009, 5(2): 217-238
doi: 10.3934/jimo.2009.5.217
Abstract:
We address the economic lot-sizing problem with remanufacturing and final disposal options (ELSR). In this lot-sizing problem, demand can also be satisfied by remanufacturing used items returned to the origin. In this paper, we suggest and evaluate a set of inventory policies designed specifically for the ELSR. They are based on a decomposition of the problem into its activities and on the key role that remanufacturing plays in solving the ELSR. We introduce the concepts of useful returns and useful remanufacturing, which allow us to obtain a remanufacturing plan that maximizes the use of returns while minimizing all of the costs involved, thereby fulfilling both ecological and economic goals simultaneously. In addition to these policies, we propose a basic tabu search procedure aimed at finding a near-optimal solution of the ELSR. All the proposed solution methods are time-effective, and the numerical experiments show that, for a given instance of the problem, at least one of them is also cost-effective.
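A minimal tabu-search skeleton of the kind the paper builds on is sketched below; the binary setup vector, the toy cost function, and all parameters are invented for illustration and are not the paper's procedure.

```python
# Generic tabu-search skeleton: search over binary remanufacturing-setup
# vectors, forbidding recently flipped periods.  cost() is a placeholder
# the reader must supply for a real ELSR instance.
import random
from collections import deque

def tabu_search(cost, n_periods, iters=200, tenure=3, seed=0):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_periods)]   # initial setups
    best, best_cost = x[:], cost(x)
    tabu = deque(maxlen=tenure)                          # recently flipped periods
    for _ in range(iters):
        # best admissible neighbor: flip one non-tabu period
        moves = [t for t in range(n_periods) if t not in tabu]
        t = min(moves, key=lambda t: cost(x[:t] + [1 - x[t]] + x[t+1:]))
        x[t] = 1 - x[t]
        tabu.append(t)
        c = cost(x)
        if c < best_cost:                                # keep incumbent
            best, best_cost = x[:], c
    return best, best_cost

# toy cost: setup cost per remanufacturing period plus penalty for idle periods
demo = lambda x: 3 * sum(x) + 5 * x.count(0)
print(tabu_search(demo, n_periods=6))
```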
2009, 5(2): 239-252
doi: 10.3934/jimo.2009.5.239
Abstract:
The logistic regression model is a powerful method for modeling the relationship between a categorical variable and a set of explanatory variables. In practice, however, the existence of maximum likelihood estimates is known to depend on the data configuration: the Maximum Likelihood Estimators (MLE) of the unknown parameters exist if and only if the data groups overlap. Hidden Logistic Regression (HLR) is an alternative model under which the observed response is related to an unobservable response; the associated Maximum Estimated Likelihood (MEL) method is immune to complete or quasi-complete separation of the data. The Principal Component Logistic Regression (PCLR) model reduces the dimension of a logistic regression model with continuous covariates, avoiding multicollinearity. In this paper we extend the HLR and PCLR models to problems with polytomous responses. The main purpose is to compare their classification performance with that of the Classical Logistic Regression (CLR) and Individualized Logistic Regression (ILR) models in the polytomous case, and to propose an alternative approach to parameter estimation in polytomous logistic models when the data groups are completely separated. Simulation results on databases taken from the literature show that the proposed approach is feasible.
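The PCLR idea can be sketched with standard tooling: project the covariates onto a few principal components, then fit a multinomial logistic model on the scores. This is an assumed pipeline for illustration, not the authors' estimator.

```python
# PCLR-style sketch: PCA scores feed a multinomial logistic model,
# sidestepping the collinearity that can make the raw-design MLE
# unstable or non-existent.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

X, y = load_iris(return_X_y=True)          # 3-class (polytomous) response
pclr = make_pipeline(PCA(n_components=2),
                     LogisticRegression(max_iter=1000))
pclr.fit(X, y)
print(pclr.score(X, y))                    # in-sample accuracy of the sketch
```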
2009, 5(2): 253-274
doi: 10.3934/jimo.2009.5.253
Abstract:
In countries where the electricity production chain has been disintegrated, transportation utilities (transmission as well as distribution) are subject to regulation or operate under supervision. Regulating activities considered natural monopolies in network economies requires knowing or assessing efficient capital and operating expenditures (CAPEX and OPEX) as functions of the output levels. The problem of determining the tariffs that transmission utilities may charge is tightly related to efficient OPEX levels. This work discusses the use of Data Envelopment Analysis and Stochastic Frontier Analysis to determine efficient frontiers for the electricity transmission activity. The results include relative efficiency and productivity indexes as well as benchmarking peers; this information can help the regulator foster efficient performance of transmission utilities.
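For readers unfamiliar with DEA, the sketch below computes the textbook input-oriented CCR efficiency score by linear programming; the utility data are invented, and the paper's actual input/output specification may differ.

```python
# Input-oriented CCR DEA (standard envelopment LP): the efficiency of
# unit o is the smallest theta with sum_j lam_j x_j <= theta x_o and
# sum_j lam_j y_j >= y_o, lam >= 0.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """X: inputs (n_units, n_in), Y: outputs (n_units, n_out)."""
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]                           # minimize theta
    A_in = np.hstack([-X[[o]].T, X.T])                    # inputs scaled by theta
    A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T])  # outputs at least y_o
    A = np.vstack([A_in, A_out])
    b = np.r_[np.zeros(X.shape[1]), -Y[o]]
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (n + 1))
    return res.fun                                        # theta* = 1: efficient

# toy utilities: one input (OPEX), one output (energy transmitted)
X = np.array([[2.0], [4.0], [3.0]])
Y = np.array([[4.0], [5.0], [3.0]])
for o in range(3):
    print(f"unit {o}: theta* = {ccr_efficiency(X, Y, o):.3f}")
```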
2009, 5(2): 275-283
doi: 10.3934/jimo.2009.5.275
Abstract:
Roadway design usually involves grade selection as a first stage, followed by a second stage that solves the resulting earthwork allocation problem. Researchers have resorted to linear programming to solve the earthwork allocation problem, using piecewise-linear segments to model the road profile. Nonlinear functions have been used to resolve the sharp connectivity points present in piecewise-linear models, but a scaling problem may arise in the computational phase.
In this paper, a one-dimensional (univariate) spline (piecewise polynomial) is used to fit the road profile and solve both the roadway grade selection and the earthwork allocation problem within a single linear program. The mathematical model is purely linear regardless of the type of spline function used, and it guarantees global optimality. This approach resolves the scaling problem while preserving the flexibility and smoothness of the road (no sharp connectivity points). The proposed model outperforms all previous models in terms of efficiency and cost savings. Three cases are considered for illustration.
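The earthwork allocation subproblem reduces to a transportation LP, as sketched below with invented cut/fill data; the paper's full model couples this allocation with the spline-based grade selection.

```python
# Earthwork allocation as a transportation LP: move cut volumes to fill
# sections at distance-proportional haul cost.  All data are invented.
import numpy as np
from scipy.optimize import linprog

cut = np.array([300.0, 200.0])           # m^3 available at cut sections
fill = np.array([250.0, 150.0, 100.0])   # m^3 required at fill sections
pos_cut, pos_fill = [0.0, 4.0], [1.0, 3.0, 5.0]
cost = np.array([[abs(a - b) for b in pos_fill] for a in pos_cut])

m, n = cost.shape
A_eq, b_eq = [], []
for i in range(m):                        # each cut section fully hauled away
    row = np.zeros(m * n); row[i*n:(i+1)*n] = 1
    A_eq.append(row); b_eq.append(cut[i])
for j in range(n):                        # each fill section fully supplied
    row = np.zeros(m * n); row[j::n] = 1
    A_eq.append(row); b_eq.append(fill[j])

res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=b_eq,
              bounds=[(0, None)] * (m * n))
print(res.x.reshape(m, n), res.fun)       # optimal haul plan and total cost
```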
2009, 5(2): 285-302
doi: 10.3934/jimo.2009.5.285
Abstract:
The productivity of a sea port depends, in part, on the stacking cranes working in the blocks of its storage yard. Each container leaving a block must be moved by a storage-yard crane to a buffer zone during a specific time window so that it can reach its destination on time, and containers entering a block for storage must be moved out of the buffer zone soon enough to avoid overflow. In this paper, we formulate integer linear programs that prescribe movements for transporting and stacking containers in storage yards using one or two equally sized Automated Stacking Cranes (ASCs) working with straddle carriers. Using real-world data, we construct test problems that vary both the number of container bays and the fullness of the block. We find that one ASC working alone requires up to 70% more time than two ASCs working together to accomplish the same container movements. Optimal solutions of the integer linear programs are typically obtained in only a few seconds.
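The flavor of such formulations can be illustrated with a toy mixed-integer program that splits container moves between two cranes to minimize makespan; the data are invented, and the paper's model additionally handles time windows, buffer capacity, and crane interference.

```python
# Toy MILP sketch with PuLP (not the paper's formulation): assign moves
# with known handling times to two cranes so the busier crane finishes
# as early as possible.
import pulp

times = [4, 3, 7, 2, 5, 6]                      # invented move durations
cranes = [0, 1]
prob = pulp.LpProblem("asc_split", pulp.LpMinimize)
x = {(m, c): pulp.LpVariable(f"x_{m}_{c}", cat="Binary")
     for m in range(len(times)) for c in cranes}
T = pulp.LpVariable("makespan", lowBound=0)
prob += T                                        # objective: minimize makespan
for m in range(len(times)):                      # every move done exactly once
    prob += pulp.lpSum(x[m, c] for c in cranes) == 1
for c in cranes:                                 # workload bounds the makespan
    prob += pulp.lpSum(times[m] * x[m, c] for m in range(len(times))) <= T
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.value(T), [(m, c) for (m, c), v in x.items() if v.value() == 1])
```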
2009, 5(2): 303-317
doi: 10.3934/jimo.2009.5.303
Abstract:
The lack of integration among the factors involved in wood sawing planning creates a gap in the production system and, thus, technical-economic inefficiency in these industries. In this paper we propose a Lexicographic Goal Programming model that considers several criteria sequentially: first production, as traditionally done, then performance, and finally appropriate use of the stock of warehoused logs. This multi-objective model is applied to a Cuban sawmill, and a metaheuristic method is developed and implemented to solve it.
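Lexicographic goal programming solves one priority level at a time, freezing each achieved level before moving to the next; a generic sequential-LP sketch with invented sawmill-flavored data follows (it is not the paper's model or metaheuristic).

```python
# Sequential-LP sketch of lexicographic goal programming: minimize the
# under-achievement of goal k, then freeze that level for later stages.
import numpy as np
from scipy.optimize import linprog

# decision: volumes of two cutting patterns; rows = (boards, yield)
A = np.array([[2.0, 3.0],      # boards produced per unit of each pattern
              [0.8, 0.6]])     # yield contribution per unit
goals = np.array([12.0, 4.0])  # target boards, target yield
cap = 6.0                      # log availability: x1 + x2 <= cap

achieved = []
for k in range(2):             # one priority level at a time
    # variables: x1, x2, d (under-achievement of goal k)
    c = np.r_[0.0, 0.0, 1.0]                  # minimize deviation d
    A_ub = [np.r_[-A[k], -1.0],               # A_k x + d >= goal_k
            np.r_[1.0, 1.0, 0.0]]             # capacity
    b_ub = [-goals[k], cap]
    for j, dj in achieved:                    # freeze higher-priority levels
        A_ub.append(np.r_[-A[j], 0.0]); b_ub.append(dj - goals[j])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
                  bounds=[(0, None)] * 3)
    achieved.append((k, res.fun))
    print(f"level {k}: deviation {res.fun:.3f}, x = {res.x[:2]}")
```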
2009, 5(2): 319-339
doi: 10.3934/jimo.2009.5.319
Abstract:
We consider an inverse problem arising from the semi-definite quadratic programming (SDQP) problem. In the inverse problem, the parameters in the objective function of a given SDQP problem are adjusted as little as possible so that a known feasible solution becomes the optimal one. We formulate this problem as a minimization problem with a positive semi-definite cone constraint; its dual is a linearly positive semi-definite cone constrained semismoothly differentiable ($\mbox{SC}^1$) convex programming problem with fewer variables than the original one. We demonstrate the global convergence of the augmented Lagrangian method for the dual problem and prove that the convergence rate of the primal iterates generated by the augmented Lagrangian method is proportional to $1/t$, while that of the multiplier iterates is proportional to $1/\sqrt{t}$, where $t$ is the penalty parameter in the augmented Lagrangian. Numerical results are reported to show the effectiveness of the augmented Lagrangian method for solving the inverse semi-definite quadratic programming problem.
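The augmented Lagrangian mechanics can be illustrated on a toy equality-constrained quadratic (not the paper's $\mbox{SC}^1$ dual problem): minimize the augmented Lagrangian exactly in the primal variable, then take a first-order multiplier step scaled by the penalty parameter $t$.

```python
# Augmented-Lagrangian sketch on min ||x - a||^2 s.t. Cx = d; the toy
# data are invented, and t plays the role of the penalty parameter
# appearing in the O(1/t) and O(1/sqrt(t)) rates discussed above.
import numpy as np

a = np.array([3.0, 1.0]); C = np.array([[1.0, 1.0]]); d = np.array([2.0])
lam, t = np.zeros(1), 10.0            # multiplier, penalty parameter

for _ in range(50):
    # exact minimizer of L_t(x, lam) = ||x-a||^2 + lam.(Cx-d) + (t/2)||Cx-d||^2,
    # a linear solve because the toy objective is quadratic
    H = 2 * np.eye(2) + t * C.T @ C
    g = 2 * a - C.T @ lam + t * C.T @ d
    x = np.linalg.solve(H, g)
    lam = lam + t * (C @ x - d)       # first-order multiplier update

print(x)   # -> projection of a onto {x1 + x2 = 2}, i.e. [2., 0.]
```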
2009, 5(2): 341-349
doi: 10.3934/jimo.2009.5.341
Abstract:
In this paper, we introduce the concept of well-posedness for the vector quasi-equilibrium problem. We obtain some necessary and sufficient conditions for well-posedness of vector quasi-equilibrium problems. As applications, we investigate the well-posedness for vector quasi-variational inequality problems and vector quasi-optimization problems.
2009, 5(2): 351-361
doi: 10.3934/jimo.2009.5.351
Abstract:
Traditional univariate cubic spline models assume that the position and function value of each knot are given precisely. It has been observed that errors in the data can cause significant fluctuations in the resulting spline. To handle situations that involve uncertainty only in the measured function values, the concept of a robust spline has been developed in the literature. We propose a more general concept of a PH-robust cubic spline that, using the paradigm of robust optimization, also takes into account uncertainty in the positions of the measurements (knots or boundary points). This bridges the robustness concepts developed in the interpolation/approximation and optimization communities. Our model handles the case of "coordinated" variations of measurement positions. It is formulated as a semi-infinite convex optimization problem, which we reformulate as a finite explicit convex optimization problem, making it possible to use standard convex optimization algorithms for computation.
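A simplified sketch of the value-uncertainty case (the PH-robust spline above additionally covers uncertainty in measurement positions): if each $y_i$ is known only up to $\pm\rho$, the worst-case least-squares objective becomes $\sum_i (|r_i| + \rho)^2 = \|r\|_2^2 + 2\rho\|r\|_1 + n\rho^2$, which remains convex. Below it is solved with cvxpy on invented data, with a cubic polynomial standing in for the spline.

```python
# Value-robust fit sketch: minimize the worst case over |delta_i| <= rho
# of ||B c - (y + delta)||^2, i.e. sum_squares(r) + 2*rho*norm1(r) + const.
import cvxpy as cp
import numpy as np

x = np.linspace(0, 1, 20)
y = np.sin(2 * np.pi * x) + 0.05 * np.random.default_rng(0).standard_normal(20)
B = np.vander(x, 4)                   # cubic polynomial basis (spline stand-in)
rho = 0.1                             # assumed measurement-value uncertainty

c = cp.Variable(4)
resid = B @ c - y
worst_case = cp.sum_squares(resid) + 2 * rho * cp.norm1(resid)
cp.Problem(cp.Minimize(worst_case)).solve()
print(c.value)                        # robust cubic coefficients
```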
2009, 5(2): 363-379
doi: 10.3934/jimo.2009.5.363
Abstract:
In the modeling of competition on networks it is usually assumed that users behave according to either the Wardropian user-equilibrium or the system-optimum concept. Nevertheless, in several equilibrium situations, for instance in urban traffic flows, intercity freight flows, and telecommunication networks, mixed behavior is observed. This paper presents a time-dependent network-based model shared by two types of users: generalized Nash players, who have a significant impact on the load of the network, and user-equilibrium players, whose individual impact is negligible. Both classes of players choose the paths on which to send their flows so as to minimize their own costs, but they apply different optimization criteria. Players interact via implicit balance constraints that depend on the equilibrium solution, and the equilibrium distribution is proved to be equivalent to the solution of a time-dependent quasi-variational inequality problem. Results on the existence of solutions are discussed, and a numerical example is given.
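As a degenerate special case (user-equilibrium players only, no Nash players, no time dependence), the sketch below computes a Wardrop split between two parallel routes with invented linear cost functions; at equilibrium, both used routes have equal cost.

```python
# Toy Wardrop user equilibrium on two parallel routes with costs
# c1(f) = 2 + f and c2(f) = 1 + 3 f (invented): bisect on the flow f1
# carried by route 1 until the route costs equalize.
def wardrop_split(demand, iters=60):
    lo, hi = 0.0, demand
    for _ in range(iters):
        f1 = (lo + hi) / 2
        c1, c2 = 2 + f1, 1 + 3 * (demand - f1)
        if c1 > c2:
            hi = f1              # route 1 too costly: shift flow away
        else:
            lo = f1
    return f1

f1 = wardrop_split(4.0)
print(f1, 4.0 - f1)              # -> 2.75, 1.25 (both routes cost 4.75)
```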
2009, 5(2): 381-390
doi: 10.3934/jimo.2009.5.381
Abstract:
A class of generalized proximal point algorithms based on $A$-maximal monotonicity is introduced and then applied to the approximation solvability of a general class of nonlinear inclusion problems using the generalized resolvent operator technique. The approach is of interest because it is application-oriented.
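The classical special case of this framework is the proximal point iteration $x_{k+1} = (I + c\,\partial f)^{-1}(x_k)$; the sketch below runs it for $f = |\cdot|$, whose resolvent is soft-thresholding. This illustrates the resolvent idea only, not the paper's $A$-maximal-monotone generalization.

```python
# Proximal point iteration for f(x) = |x|: the resolvent of c * d|.|
# is the soft-thresholding map, so each step shrinks x toward 0.
def soft_threshold(x, c):
    if abs(x) <= c:
        return 0.0
    return x - c if x > 0 else x + c

x, c = 5.0, 1.0
for k in range(8):
    x = soft_threshold(x, c)
    print(k, x)          # marches to 0, the minimizer of |x|
```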
2009, 5(2): 391-402
doi: 10.3934/jimo.2009.5.391
Abstract:
This paper revisits the censored newsvendor model presented by Ding, Puterman and Bisi [8]. We analyze that model in an infinite-horizon context using the concept of unnormalized probabilities, which considerably simplify the dynamic programming equation and facilitate the proof of the existence of an optimal policy. They also yield a simple, alternative proof of Ding et al.'s claim that the myopic order quantity is always less than or equal to the optimal order quantity. Importantly, unnormalized probabilities can be used to treat other important operations research problems with partial observations.
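The myopic benchmark mentioned above is the standard newsvendor critical fractile, sketched below with invented Poisson demand and costs; the paper shows that the optimal censored-demand order quantity is at least this large.

```python
# Myopic (single-period) newsvendor order quantity: the critical
# quantile q = F^{-1}(cu / (cu + co)) of the demand distribution.
from scipy.stats import poisson

cu, co = 4.0, 1.0                      # unit underage / overage cost (invented)
demand = poisson(mu=20)
q_myopic = demand.ppf(cu / (cu + co))  # critical fractile 0.8
print(q_myopic)                        # the optimal censored q* is >= this
```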
2009, 5(2): 403-415
doi: 10.3934/jimo.2009.5.403
Abstract:
The Dual-nu Support Vector Machine (SVM) is an effective method for pattern recognition and target detection. It improves on the Dual-C SVM and offers competitive detection and computation performance compared with traditional classifiers. We show that the regularisation parameters Dual-nu and Dual-C can be set such that the same SVM solution is obtained. We present the process of determining the parameters of one form from the solution of a trained SVM of the other form, and test the relationship on a digit recognition problem. The link between the Dual-nu and Dual-C parameters allows users to train with Dual-nu for convenience and switch between the two forms readily.
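An analogous experiment with the standard C-SVM/nu-SVM pair (related to, but not identical with, the paper's Dual-C/Dual-nu forms) is sketched below; the matching C is found here by naive grid search rather than the paper's derivation.

```python
# Train a nu-SVM, then look for a C-SVM whose decision function agrees
# with it on a digit recognition task (sketch with standard sklearn SVMs).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.svm import SVC, NuSVC

X, y = load_digits(n_class=2, return_X_y=True)
nu_clf = NuSVC(nu=0.05, kernel="rbf", gamma="scale").fit(X, y)
for C in [0.1, 1.0, 10.0]:
    c_clf = SVC(C=C, kernel="rbf", gamma="scale").fit(X, y)
    agree = np.mean(c_clf.predict(X) == nu_clf.predict(X))
    print(f"C={C}: agreement {agree:.3f} with nu=0.05")
```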