doi: 10.3934/naco.2020021

Numerical simulations of a rolling ball robot actuated by internal point masses

1. Department of Mathematical and Statistical Sciences, University of Alberta, CAB 632, Edmonton, AB T6G 2G1, Canada, and ATCO SpaceLab, 5302 Forand ST SW, Calgary, AB T3E 8B4, Canada

2. Institute for Mathematics and its Applications, University of Minnesota, Twin Cities, 207 Church ST SE, 306 Lind Hall, Minneapolis, MN 55455, USA

* Corresponding author: Stuart Rogers

Received: June 2019. Revised: January 2020. Published: March 2020.

The controlled motion of a rolling ball actuated by internal point masses that move along arbitrarily-shaped rails fixed within the ball is considered. The controlled equations of motion are solved numerically using a predictor-corrector continuation method, starting from an initial solution obtained via a direct method, to realize trajectory tracking and obstacle avoidance maneuvers.
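As a minimal illustration of the strategy described above (not the solver used in the paper, and with illustrative function names only), the following sketch traces a solution branch of a parameterized nonlinear system $ F\left( {\boldsymbol{z}}, \mu \right) = \boldsymbol{0} $ by alternating a tangent (Davidenko) predictor with a Newton corrector; a pseudo-arclength variant, not shown here, is what allows the turning points seen later in Figures 7 and 11 to be traversed.

    import numpy as np

    def predictor_corrector(F, J_z, J_mu, z0, mu0, mu_end, steps=50, newton_iters=20, tol=1e-10):
        # Trace a solution branch of F(z, mu) = 0 from (z0, mu0) to mu = mu_end.
        # F(z, mu) returns the residual vector, J_z the Jacobian dF/dz, J_mu the vector dF/dmu.
        z, mu = np.asarray(z0, dtype=float), float(mu0)
        dmu = (mu_end - mu0) / steps
        for _ in range(steps):
            # Predictor: differentiating F(z(mu), mu) = 0 gives J_z dz/dmu = -J_mu;
            # take an Euler step along this tangent.
            dz_dmu = np.linalg.solve(J_z(z, mu), -J_mu(z, mu))
            z, mu = z + dmu * dz_dmu, mu + dmu
            # Corrector: Newton iterations on F(., mu) = 0 at the new parameter value.
            for _ in range(newton_iters):
                r = F(z, mu)
                if np.linalg.norm(r) < tol:
                    break
                z = z - np.linalg.solve(J_z(z, mu), r)
        return z

    # Toy example: follow the root of z**3 - mu = 0 from mu = 1 to mu = 8;
    # the exact root is mu**(1/3), so the result should be close to 2.
    F = lambda z, mu: np.array([z[0]**3 - mu])
    J_z = lambda z, mu: np.array([[3.0 * z[0]**2]])
    J_mu = lambda z, mu: np.array([-1.0])
    print(predictor_corrector(F, J_z, J_mu, z0=[1.0], mu0=1.0, mu_end=8.0))  # ~[2.]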

Citation: Vakhtang Putkaradze, Stuart Rogers. Numerical simulations of a rolling ball robot actuated by internal point masses. Numerical Algebra, Control & Optimization, doi: 10.3934/naco.2020021


Figure 1.  A ball of radius $ r $ and mass $ m_0 $ rolls without slipping on a horizontal surface in the presence of a uniform gravitational field of magnitude $ g $. The ball's geometric center, center of mass, and contact point with the horizontal surface are denoted by GC, $ m_0 $, and CP, respectively. The spatial frame has its origin at height $ r $ above the horizontal surface and orthonormal axes $ \mathbf{e}_1 $, $ \mathbf{e}_2 $, and $ \mathbf{e}_3 $. The body frame has its origin at the ball's center of mass (denoted by $ m_0 $) and orthonormal axes $ \mathbf{E}_1 $, $ \mathbf{E}_2 $, and $ \mathbf{E}_3 $. The ball's motion is actuated by $ n $ point masses, each of mass $ m_i $, $ 1 \le i \le n $, and each moving along its own rail fixed inside the ball. The $ i^\mathrm{th} $ rail is depicted here by the dashed hoop. The trajectory of the $ i^\mathrm{th} $ rail, with respect to the body frame translated to the GC, is denoted by $ {\boldsymbol{\zeta}}_i $ and is parameterized by $ \theta_i $. All vectors inside the ball are expressed with respect to the body frame, while all vectors outside the ball are expressed with respect to the spatial frame
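To fix the caption's notation in a single formula (a schematic relation implied by the caption's definitions, not a reproduction of the paper's equations of motion), let $ \Lambda(t) \in SO(3) $ map body-frame vectors to spatial-frame vectors and let $ \mathbf{x}_{\mathrm{GC}}(t) $ denote the spatial position of the GC; then the spatial position of the $ i^\mathrm{th} $ point mass is $ \mathbf{x}_i(t) = \mathbf{x}_{\mathrm{GC}}(t) + \Lambda(t)\, {\boldsymbol{\zeta}}_i\left(\theta_i(t)\right) $ for $ 1 \le i \le n $, so that prescribing the rail parameters $ \theta_i(t) $ moves the internal masses and, through the rolling constraint, steers the ball.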
Figure 2.  A disk of radius $ r $ and mass $ m_0 $ rolls without slipping in the $ \mathbf{e}_1 $-$ \mathbf{e}_3 $ plane. $ \mathbf{e}_2 $ and $ \mathbf{E}_2 $ are directed into the page and are omitted from the figure. The disk's center of mass is denoted by $ m_0 $. The disk's motion is actuated by $ n $ point masses, each of mass $ m_i $, $ 1 \le i \le n $, and each moving along its own rail fixed inside the disk. The point mass depicted here by $ m_i $ moves along a circular hoop in the disk that is not centered on the disk's geometric center (GC). The disk's orientation is determined by $ \phi $, the angle measured counterclockwise from $ \mathbf{e}_1 $ to $ \mathbf{E}_1 $
Figure 3.  The disk of radius $ r = 1 $ actuated by $ 4 $ control masses, $ m_1 $, $ m_2 $, $ m_3 $, and $ m_4 $, each on its own circular control rail. The control rail radii are $ r_1 = .9 $, $ r_2 = .6\overline{3} $, $ r_3 = .3\overline{6} $, and $ r_4 = .1 $. The location of the disk's CM is denoted by $ m_0 $
Figure 4.  Numerical solutions of the rolling disk optimal control problem (19) using $ 4 $ control masses for $ \gamma_1 = \gamma_2 = \gamma_3 = \gamma_4 = .1 $ and for fixed initial and final times. The direct method results for $ \alpha = .1 $ are shown in the left column, while the predictor-corrector continuation indirect method results for $ \alpha \approx 272 $ are shown in the right column. The direct method solution tracks the desired GC path crudely, whereas the indirect method solution tracks the desired GC path very accurately at the expense of larger magnitude controls
Figure 5.  Numerical solutions of the rolling disk optimal control problem (19) using $ 4 $ control masses for $ \gamma_1 = \gamma_2 = \gamma_3 = \gamma_4 = .1 $ and for fixed initial and final times. The direct method results for $ \alpha = .1 $ are shown in the left column, while the predictor-corrector continuation indirect method results for $ \alpha \approx 272 $ are shown in the right column. The disk does not detach from the surface since the magnitude of the normal force is always positive. The disk rolls without slipping if $ \mu_\mathrm{s} \ge .07799 $ for the direct method solution and if $ \mu_\mathrm{s} \ge .3502 $ for the indirect method solution. That is, the indirect method solution requires a much larger coefficient of static friction
Figure 6.  Evolution of various parameters and variables during the predictor-corrector continuation indirect method, which starts from the direct method solution, used to solve the rolling disk optimal control problem (19). $ \mu $ decreases monotonically, while $ \alpha $ and $ J $ increase monotonically
Figure 7.  Evolution of various parameters and variables during an extended run of the predictor-corrector continuation indirect method, which starts from the direct method solution, used to solve the rolling disk optimal control problem (19). Note the turning points at solutions 7, 10, and 18. The minimum of the GC tracking error occurs at solution 7
Figure 8.  The ball of radius $ r = 1 $ actuated by $ 3 $ control masses, $ m_1 $, $ m_2 $, and $ m_3 $, each on its own circular control rail. The control rail radii are $ r_1 = .95 $, $ r_2 = .9 $, and $ r_3 = .85 $. The location of the ball's CM is denoted by $ m_0 $
Figure 9.  Numerical solutions of the rolling ball optimal control problem (37) for sigmoid obstacle avoidance using $ 3 $ control masses for $ \gamma_1 = \gamma_2 = \gamma_3 = 10 $ and for fixed initial and final times. The obstacle centers are located at $ {\boldsymbol{v}}_1 = \begin{bmatrix} v_{1,1} & v_{1,2} \end{bmatrix}^\mathsf{T} = \begin{bmatrix} .2 & .2 \end{bmatrix}^\mathsf{T} $ and $ {\boldsymbol{v}}_2 = \begin{bmatrix} v_{2,1} & v_{2,2} \end{bmatrix}^\mathsf{T} = \begin{bmatrix} .8 & .8 \end{bmatrix}^\mathsf{T} $ and the obstacle radii are $ \rho_1 = \rho_2 = .282 $. The direct method results for obstacle heights at $ h_1 = h_2 = 0 $ are shown in the left column, while the predictor-corrector continuation indirect method results for obstacle heights at $ h_1 = h_2 \approx 9.93 $ are shown in the right column
Figure 10.  Numerical solutions of the rolling ball optimal control problem (37) for sigmoid obstacle avoidance using $ 3 $ control masses for $ \gamma_1 = \gamma_2 = \gamma_3 = 10 $ and for fixed initial and final times. The direct method results for obstacle heights at $ h_1 = h_2 = 0 $ are shown in the left column, while the predictor-corrector continuation indirect method results for obstacle heights at $ h_1 = h_2 \approx 9.93 $ are shown in the right column. The ball does not detach from the surface since the magnitude of the normal force is always positive. The ball rolls without slipping if $ \mu_\mathrm{s} \ge .1055 $ for the direct method solution and if $ \mu_\mathrm{s} \ge .0988 $ for the indirect method solution
Figure 11.  Evolution of various parameters and variables during the predictor-corrector continuation indirect method, which starts from the direct method solution, used to solve the rolling ball optimal control problem (37) for sigmoid obstacle avoidance. Note the turning points at solutions 3 and 4
Figure 12.  Numerical solutions of the rolling ball optimal control problem (37) for $ {\rm ReLU} $ obstacle avoidance using $ 3 $ control masses. A direct method is applied first (left column), followed by predictor-corrector continuation in the obstacle heights $ h_1 = h_2 $ (middle column), and finally by predictor-corrector continuation in the control coefficients $ \gamma_1 = \gamma_2 = \gamma_3 $ (right column)
Figure 13.  A direct method (left column) is followed by two rounds of a predictor-corrector continuation indirect method to realize a $ {\rm ReLU} $ obstacle avoidance maneuver for the rolling ball. The first round (middle column) of predictor-corrector continuation increases the obstacle heights $ h_1 = h_2 $, and the second round (right column) of predictor-corrector continuation decreases the control coefficients $ \gamma_1 = \gamma_2 = \gamma_3 $. The ball does not detach from the surface since the magnitude of the normal force is always positive. The ball rolls without slipping if $ \mu_\mathrm{s} \ge .1055 $ for the direct method solution, if $ \mu_\mathrm{s} \ge .09942 $ for the first indirect method solution, and if $ \mu_\mathrm{s} \ge .09917 $ for the second indirect method solution
Figure 14.  Predictor-corrector continuation in the obstacle heights $ h_1 = h_2 $ (left column) is followed by predictor-corrector continuation in the control coefficients $ \gamma_1 = \gamma_2 = \gamma_3 $ (right column) to realize a $ {\rm ReLU} $ obstacle avoidance maneuver for the rolling ball
Figure 15.  Predictor-corrector continuation
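In connection with Figure 15 and the turning points noted in Figures 7 and 11, a brief reminder of the standard pseudo-arclength idea (stated generically here; it is not claimed to be the exact scheme implemented in the paper): instead of parameterizing the branch of solutions of $ F\left( {\boldsymbol{z}}, \mu \right) = \boldsymbol{0} $ by $ \mu $, one parameterizes it by arclength $ s $ and augments the system with the normalization $ \dot {\boldsymbol{z}}_0^\mathsf{T} \left( {\boldsymbol{z}}(s) - {\boldsymbol{z}}_0 \right) + \dot \mu_0 \left( \mu(s) - \mu_0 \right) = s - s_0 $, where $ \left( {\boldsymbol{z}}_0, \mu_0 \right) $ is the last computed solution and $ \left( \dot {\boldsymbol{z}}_0, \dot \mu_0 \right) $ is the unit tangent there. The predictor steps along this tangent and the corrector applies Newton's method to the augmented system, which remains well posed at simple turning points where $ \mu $ reverses direction.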
Table 1.  Initial condition parameter values for the rolling disk. Refer to (22) and (23)
Parameter Value
$ {\boldsymbol{\theta}}_a $ $ \begin{bmatrix} -\frac{\pi}{2} & -\frac{\pi}{2} & -\frac{\pi}{2} & -\frac{\pi}{2} \end{bmatrix}^\mathsf{T} $
$ \dot {\boldsymbol{\theta}}_a $ $ \begin{bmatrix} 0 & 0 & 0 & 0 \end{bmatrix}^\mathsf{T} $
$ \phi_a $ $ 0 $
$ z_a $ $ 0 $
$ \dot z_a $ $ 0 $
Table 2.  Final condition parameter values for the rolling disk. Refer to (23)
Parameter Value
$ \dot {\boldsymbol{\theta}}_b $ $ \begin{bmatrix} 0 & 0 & 0 & 0 \end{bmatrix}^\mathsf{T} $
$ z_b $ $ 1 $
$ \dot z_b $ $ 0 $
Table 3.  Integrand cost function coefficient values for the rolling disk when predictor-corrector continuation is performed in $ \alpha $. Refer to (27)
Parameter Value
$ \alpha(\mu) $ $ .1+\frac{.95-\mu}{.95-.00001}\left(5000-.1\right) $
$ \gamma_1=\gamma_2=\gamma_3=\gamma_4 $ $ .1 $
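To make the homotopy in Table 3 concrete, the sketch below (illustrative only; the bracketing values are copied from the table) evaluates the linear schedule $ \alpha(\mu) $ as the continuation parameter $ \mu $ decreases from $ .95 $ toward $ .00001 $. The same linear-interpolation pattern generates $ h_1(\mu) = h_2(\mu) $ in Table 6 and $ \gamma_1(\mu) = \gamma_2(\mu) = \gamma_3(\mu) $ in Table 7.

    def linear_schedule(mu, p_start, p_end, mu_start=0.95, mu_end=0.00001):
        # Linearly interpolate a coefficient as mu runs from mu_start down to mu_end.
        return p_start + (mu_start - mu) / (mu_start - mu_end) * (p_end - p_start)

    # alpha(mu) from Table 3: alpha(.95) = .1 and alpha(.00001) = 5000.
    for mu in (0.95, 0.5, 0.00001):
        print(mu, linear_schedule(mu, p_start=0.1, p_end=5000.0))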
Table 4.  Initial condition parameter values for the rolling ball. Refer to (43)
Parameter Value
$ {\boldsymbol{\theta}}_a $ $ \begin{bmatrix} 0 & 2.0369 & .7044 \end{bmatrix}^\mathsf{T} $
$ \dot {\boldsymbol{\theta}}_a $ $ \begin{bmatrix} 0 & 0 & 0 \end{bmatrix}^\mathsf{T} $
$ \mathfrak{q}_a $ $ \begin{bmatrix} 1 & 0 & 0 & 0 \end{bmatrix}^\mathsf{T} $
$ {{\boldsymbol{\Omega}}}_a $ $ \begin{bmatrix} 0 & 0 & 0 \end{bmatrix}^\mathsf{T} $
$ {\boldsymbol{z}}_a $ $ \begin{bmatrix} 0 & 0 \end{bmatrix}^\mathsf{T} $
Table 5.  Final condition parameter values for the rolling ball. Refer to (44)
Parameter Value
$ \dot {\boldsymbol{\theta}}_b $ $ \begin{bmatrix} 0 & 0 & 0 \end{bmatrix}^\mathsf{T} $
$ {{\boldsymbol{\Omega}}}_b $ $ \begin{bmatrix} 0 & 0 & 0 \end{bmatrix}^\mathsf{T} $
$ {\boldsymbol{z}}_b $ $ \begin{bmatrix} 1 & 1 \end{bmatrix}^\mathsf{T} $
Table 6.  Integrand cost function coefficient values for the rolling ball when predictor-corrector continuation is performed in the obstacle heights. Refer to (48)
Parameter Value
$ \gamma_1=\gamma_2=\gamma_3 $ $ 10 $
$ h_1(\mu)=h_2(\mu) $ $ \frac{.95-\mu}{.95-.00001}\left(1000\right) $
$ {\boldsymbol{v}}_1 $ $ \begin{bmatrix} .2 & .2 \end{bmatrix}^\mathsf{T} $
$ {\boldsymbol{v}}_2 $ $ \begin{bmatrix} .8 & .8 \end{bmatrix}^\mathsf{T} $
$ \rho_1=\rho_2 $ $ .282 $
Table 7.  Integrand cost function coefficient values for the rolling ball when a second round of predictor-corrector continuation is performed in the control coefficients. Refer to (48)
Parameter Value
$ \gamma_1(\mu)=\gamma_2(\mu)=\gamma_3(\mu) $ $ 10+\frac{.95-\mu}{.95-.00001}\left(-1000-10\right) $
$ h_1=h_2 $ $ 7.846\mathrm{e}{8} $
$ {\boldsymbol{v}}_1 $ $ \begin{bmatrix} .2 & .2 \end{bmatrix}^\mathsf{T} $
$ {\boldsymbol{v}}_2 $ $ \begin{bmatrix} .8 & .8 \end{bmatrix}^\mathsf{T} $
$ \rho_1=\rho_2 $ $ .282 $
Table 8.  Explanation of shorthand notation for zeroth and first derivatives of $ \hat{\mathbf{f}} $ and first and second derivatives of $ \hat{H} $ used in (86) and (87)
Shorthand $ \mathbf{\vert} $ Extended Shorthand $ \mathbf{\vert} $ Normalized $ \mathbf{\vert} $ Un-Normalized
$ \hat{\mathbf{f}} $ = $ \left. \hat{\mathbf{f}} \right|_{\left( s,\tilde {\boldsymbol{z}}(s),\mu \right)} $ = $ \hat{\mathbf{f}} \left(t(s),\tilde {{\boldsymbol {x} }}(s),\tilde {\boldsymbol{\lambda}}(s),\mu\right) $ = $ \hat{\mathbf{f}} \left(t(s),{{\boldsymbol {x} }}(t(s)), {\boldsymbol{\lambda}}(t(s)),\mu\right) $
$ \hat{\mathbf{f}}_{ {\boldsymbol{\lambda}}} $ = $ \left. \hat{\mathbf{f}}_{ {\boldsymbol{\lambda}}} \right|_{\left( s,\tilde {\boldsymbol{z}}(s),\mu \right)} $ = $ \hat{\mathbf{f}}_{ {\boldsymbol{\lambda}}} \left(t(s),\tilde {{\boldsymbol {x} }}(s),\tilde {\boldsymbol{\lambda}}(s),\mu\right) $ = $ \hat{\mathbf{f}}_{ {\boldsymbol{\lambda}}} \left(t(s),{{\boldsymbol {x} }}(t(s)), {\boldsymbol{\lambda}}(t(s)),\mu\right) $
$ \hat{\mathbf{f}}_{{{\boldsymbol {x} }}} $ = $ \left. \hat{\mathbf{f}}_{{{\boldsymbol {x} }}} \right|_{\left( s,\tilde {\boldsymbol{z}}(s),\mu \right)} $ = $ \hat{\mathbf{f}}_{{{\boldsymbol {x} }}} \left(t(s),\tilde {{\boldsymbol {x} }}(s),\tilde {\boldsymbol{\lambda}}(s),\mu\right) $ = $ \hat{\mathbf{f}}_{{{\boldsymbol {x} }}} \left(t(s),{{\boldsymbol {x} }}(t(s)), {\boldsymbol{\lambda}}(t(s)),\mu\right) $
$ \hat{\mathbf{f}}_{t} $ = $ \left. \hat{\mathbf{f}}_{t} \right|_{\left( s,\tilde {\boldsymbol{z}}(s),\mu \right)} $ = $ \hat{\mathbf{f}}_{t} \left(t(s),\tilde {{\boldsymbol {x} }}(s),\tilde {\boldsymbol{\lambda}}(s),\mu\right) $ = $ \hat{\mathbf{f}}_{t} \left(t(s),{{\boldsymbol {x} }}(t(s)), {\boldsymbol{\lambda}}(t(s)),\mu\right) $
$ \hat{\mathbf{f}}_{\mu} $ = $ \left. \hat{\mathbf{f}}_{\mu} \right|_{\left( s,\tilde {\boldsymbol{z}}(s),\mu \right)} $ = $ \hat{\mathbf{f}}_{\mu} \left(t(s),\tilde {{\boldsymbol {x} }}(s),\tilde {\boldsymbol{\lambda}}(s),\mu\right) $ = $ \hat{\mathbf{f}}_{\mu} \left(t(s),{{\boldsymbol {x} }}(t(s)), {\boldsymbol{\lambda}}(t(s)),\mu\right) $
$ \hat{H}_{{{\boldsymbol {x} }}}^\mathsf{T} $ = $ \left. \hat{H}_{{{\boldsymbol {x} }}}^\mathsf{T} \right|_{\left( s,\tilde {\boldsymbol{z}}(s),\mu \right)} $ = $ \hat{H}_{{{\boldsymbol {x} }}}^\mathsf{T} \left(t(s),\tilde {{\boldsymbol {x} }}(s),\tilde {\boldsymbol{\lambda}}(s),\mu\right) $ = $ \hat{H}_{{{\boldsymbol {x} }}}^\mathsf{T} \left(t(s),{{\boldsymbol {x} }}(t(s)), {\boldsymbol{\lambda}}(t(s)),\mu\right) $
$ \hat{H}_{{{\boldsymbol {x} }} {{\boldsymbol {x} }}} $ = $ \left. \hat{H}_{{{\boldsymbol {x} }} {{\boldsymbol {x} }}} \right|_{\left( s,\tilde {\boldsymbol{z}}(s) ,\mu\right)} $ = $ \hat{H}_{{{\boldsymbol {x} }} {{\boldsymbol {x} }}} \left(t(s),\tilde {{\boldsymbol {x} }}(s),\tilde {\boldsymbol{\lambda}}(s),\mu\right) $ = $ \hat{H}_{{{\boldsymbol {x} }} {{\boldsymbol {x} }}} \left(t(s),{{\boldsymbol {x} }}(t(s)), {\boldsymbol{\lambda}}(t(s)),\mu\right) $
$ \hat{H}_{{{\boldsymbol {x} }} t} $ = $ \left. \hat{H}_{{{\boldsymbol {x} }} t} \right|_{\left( s,\tilde {\boldsymbol{z}}(s),\mu \right)} $ = $ \hat{H}_{{{\boldsymbol {x} }} t} \left(t(s),\tilde {{\boldsymbol {x} }}(s),\tilde {\boldsymbol{\lambda}}(s),\mu\right) $ = $ \hat{H}_{{{\boldsymbol {x} }} t} \left(t(s),{{\boldsymbol {x} }}(t(s)), {\boldsymbol{\lambda}}(t(s)),\mu\right) $
$ \hat{H}_{{{\boldsymbol {x} }} \mu} $ = $ \left. \hat{H}_{{{\boldsymbol {x} }} \mu} \right|_{\left( s,\tilde {\boldsymbol{z}}(s),\mu \right)} $ = $ \hat{H}_{{{\boldsymbol {x} }} \mu} \left(t(s),\tilde {{\boldsymbol {x} }}(s),\tilde {\boldsymbol{\lambda}}(s),\mu\right) $ = $ \hat{H}_{{{\boldsymbol {x} }} \mu} \left(t(s),{{\boldsymbol {x} }}(t(s)), {\boldsymbol{\lambda}}(t(s)),\mu\right) $
Table 9.  Explanation of shorthand notation for $ \hat{\mathbf{f}} $ and first derivatives of $ \hat{H} $ evaluated at $ a $ used in (113), (114), and (115). Note that $ \left. \hat{H}_{ {\boldsymbol{\lambda}}} \right|_a = \hat{H}_{ {\boldsymbol{\lambda}}} \left( a,{{\boldsymbol {x} }}(a), {\boldsymbol{\lambda}}(a),\mu \right) = \left. \hat{\mathbf{f}}^\mathsf{T} \right|_a $
Shorthand $ \mathbf{\vert} $ Meaning $ \mathbf{\vert} $ Simplification
$ \left. \hat{H}_{{{\boldsymbol {x} }}} \right|_a $ = $ \hat{H}_{{{\boldsymbol {x} }}} \left( a,{{\boldsymbol {x} }}(a), {\boldsymbol{\lambda}}(a),\mu \right) $ = $ H_{{{\boldsymbol {x} }}} \left( a,{{\boldsymbol {x} }}(a), {\boldsymbol{\lambda}}(a), {\boldsymbol{\pi}}\left(a,{{\boldsymbol {x} }}(a), {\boldsymbol{\lambda}}(a),\mu\right),\mu \right) $
$ \left. \hat{\mathbf{f}}^\mathsf{T} \right|_a $ = $ \hat{\mathbf{f}}^\mathsf{T} \left( a,{{\boldsymbol {x} }}(a), {\boldsymbol{\lambda}}(a),\mu \right) $ = $ \mathbf{f}^\mathsf{T} \left( a,{{\boldsymbol {x} }}(a), {\boldsymbol{\lambda}}(a), {\boldsymbol{\pi}}\left(a,{{\boldsymbol {x} }}(a), {\boldsymbol{\lambda}}(a),\mu\right),\mu \right) $
$ \left. \hat{H}_t \right|_a $ = $ \hat{H}_t \left( a,{{\boldsymbol {x} }}(a), {\boldsymbol{\lambda}}(a),\mu \right) $ = $ H_t \left( a,{{\boldsymbol {x} }}(a), {\boldsymbol{\lambda}}(a), {\boldsymbol{\pi}}\left(a,{{\boldsymbol {x} }}(a), {\boldsymbol{\lambda}}(a),\mu\right),\mu \right) $
$ \left. \hat{H}_\mu \right|_a $ = $ \hat{H}_\mu \left( a,{{\boldsymbol {x} }}(a), {\boldsymbol{\lambda}}(a),\mu \right) $ = $ H_\mu \left( a,{{\boldsymbol {x} }}(a), {\boldsymbol{\lambda}}(a), {\boldsymbol{\pi}}\left(a,{{\boldsymbol {x} }}(a), {\boldsymbol{\lambda}}(a),\mu\right),\mu \right) $
Table 10.  Explanation of shorthand notation for first derivatives of $ {\boldsymbol{\sigma}} $ used in (113), (114), and (115)
Shorthand $ \mathbf{\vert} $ Meaning
$ {\boldsymbol{\sigma}}_{{{\boldsymbol {x} }}(a)} $ = $ {\boldsymbol{\sigma}}_{{{\boldsymbol {x} }}(a)} \left( a,{{\boldsymbol {x} }}(a),\mu \right) $
$ {\boldsymbol{\sigma}}_{a} $ = $ {\boldsymbol{\sigma}}_{a} \left( a,{{\boldsymbol {x} }}(a),\mu \right) $
$ {\boldsymbol{\sigma}}_{\mu} $ = $ {\boldsymbol{\sigma}}_{\mu} \left( a,{{\boldsymbol {x} }}(a),\mu \right) $
Table 11.  Explanation of shorthand notation for $ \hat{\mathbf{f}} $ and first derivatives of $ \hat{H} $ evaluated at $ b $ used in (113), (114), and (115). Note that $ \left. \hat{H}_{ {\boldsymbol{\lambda}}} \right|_b = \hat{H}_{ {\boldsymbol{\lambda}}} \left( b,{{\boldsymbol {x} }}(b), {\boldsymbol{\lambda}}(b),\mu \right) = \left. \hat{\mathbf{f}}^\mathsf{T} \right|_b $
Shorthand $ \mathbf{\vert} $ Meaning $ \mathbf{\vert} $ Simplification
$ \left. \hat{H}_{{{\boldsymbol {x} }}} \right|_b $ = $ \hat{H}_{{{\boldsymbol {x} }}} \left( b,{{\boldsymbol {x} }}(b), {\boldsymbol{\lambda}}(b),\mu \right) $ = $ H_{{{\boldsymbol {x} }}} \left( b,{{\boldsymbol {x} }}(b), {\boldsymbol{\lambda}}(b), {\boldsymbol{\pi}}\left(b,{{\boldsymbol {x} }}(b), {\boldsymbol{\lambda}}(b),\mu\right),\mu \right) $
$ \left. \hat{\mathbf{f}}^\mathsf{T} \right|_b $ = $ \hat{\mathbf{f}}^\mathsf{T} \left( b,{{\boldsymbol {x} }}(b), {\boldsymbol{\lambda}}(b),\mu \right) $ = $ \mathbf{f}^\mathsf{T} \left( b,{{\boldsymbol {x} }}(b), {\boldsymbol{\lambda}}(b), {\boldsymbol{\pi}}\left(b,{{\boldsymbol {x} }}(b), {\boldsymbol{\lambda}}(b),\mu\right),\mu \right) $
$ \left. \hat{H}_t \right|_b $ = $ \hat{H}_t \left( b,{{\boldsymbol {x} }}(b), {\boldsymbol{\lambda}}(b),\mu \right) $ = $ H_t \left( b,{{\boldsymbol {x} }}(b), {\boldsymbol{\lambda}}(b), {\boldsymbol{\pi}}\left(b,{{\boldsymbol {x} }}(b), {\boldsymbol{\lambda}}(b),\mu\right),\mu \right) $
$ \left. \hat{H}_\mu \right|_b $ = $ \hat{H}_\mu \left( b,{{\boldsymbol {x} }}(b), {\boldsymbol{\lambda}}(b),\mu \right) $ = $ H_\mu \left( b,{{\boldsymbol {x} }}(b), {\boldsymbol{\lambda}}(b), {\boldsymbol{\pi}}\left(b,{{\boldsymbol {x} }}(b), {\boldsymbol{\lambda}}(b),\mu\right),\mu \right) $
Table 12.  Explanation of shorthand notation for first derivatives of $ {\boldsymbol{\psi}} $ used in (113), (114), and (115)
Shorthand $ \mathbf{\vert} $ Meaning
$ {\boldsymbol{\psi}}_{{{\boldsymbol {x} }}(b)} $ = $ {\boldsymbol{\psi}}_{{{\boldsymbol {x} }}(b)} \left( b,{{\boldsymbol {x} }}(b),\mu \right) $
$ {\boldsymbol{\psi}}_{b} $ = $ {\boldsymbol{\psi}}_{b} \left( b,{{\boldsymbol {x} }}(b),\mu \right) $
$ {\boldsymbol{\psi}}_{\mu} $ = $ {\boldsymbol{\psi}}_{\mu} \left( b,{{\boldsymbol {x} }}(b),\mu \right) $
Table 13.  Equality between Jacobians of two-point boundary condition functions in normalized and un-normalized coordinates
Normalized | Un-Normalized
$ \tilde {\boldsymbol{\Upsilon}}_{\tilde {\boldsymbol{z}}(0)}\left(\tilde {\boldsymbol{z}}(0),\tilde {\boldsymbol{z}}(1),\mu\right) $ = $ {\boldsymbol{\Upsilon}}_{ {\boldsymbol{z}}(a)}\left( {\boldsymbol{z}}(a), {\boldsymbol{z}}(b),\mu\right) $
$ \tilde {\boldsymbol{\Upsilon}}_{1,\tilde {\boldsymbol{z}}(0)}\left(\tilde {\boldsymbol{z}}(0),\tilde {\boldsymbol{z}}(1),\mu\right) $ = $ {\boldsymbol{\Upsilon}}_{1, {\boldsymbol{z}}(a)}\left( {\boldsymbol{z}}(a), {\boldsymbol{z}}(b),\mu\right) $
$ \tilde {\boldsymbol{\Upsilon}}_{2,\tilde {\boldsymbol{z}}(0)}\left(\tilde {\boldsymbol{z}}(0),\tilde {\boldsymbol{z}}(1),\mu\right) $ = $ {\boldsymbol{\Upsilon}}_{2, {\boldsymbol{z}}(a)}\left( {\boldsymbol{z}}(a), {\boldsymbol{z}}(b),\mu\right) $
$ \tilde {\boldsymbol{\Upsilon}}_{\tilde {\boldsymbol{z}}(1)}\left(\tilde {\boldsymbol{z}}(0),\tilde {\boldsymbol{z}}(1),\mu\right) $ = $ {\boldsymbol{\Upsilon}}_{ {\boldsymbol{z}}(b)}\left( {\boldsymbol{z}}(a), {\boldsymbol{z}}(b),\mu\right) $
$ \tilde {\boldsymbol{\Upsilon}}_{1,\tilde {\boldsymbol{z}}(1)}\left(\tilde {\boldsymbol{z}}(0),\tilde {\boldsymbol{z}}(1),\mu\right) $ = $ {\boldsymbol{\Upsilon}}_{1, {\boldsymbol{z}}(b)}\left( {\boldsymbol{z}}(a), {\boldsymbol{z}}(b),\mu\right) $
$ \tilde {\boldsymbol{\Upsilon}}_{2,\tilde {\boldsymbol{z}}(1)}\left(\tilde {\boldsymbol{z}}(0),\tilde {\boldsymbol{z}}(1),\mu\right) $ = $ {\boldsymbol{\Upsilon}}_{2, {\boldsymbol{z}}(b)}\left( {\boldsymbol{z}}(a), {\boldsymbol{z}}(b),\mu\right) $
$ \tilde {\boldsymbol{\Upsilon}}_{\mu}\left(\tilde {\boldsymbol{z}}(0),\tilde {\boldsymbol{z}}(1),\mu\right) $ = $ {\boldsymbol{\Upsilon}}_{\mu}\left( {\boldsymbol{z}}(a), {\boldsymbol{z}}(b),\mu\right) $
$ \tilde {\boldsymbol{\Upsilon}}_{1,\mu}\left(\tilde {\boldsymbol{z}}(0),\tilde {\boldsymbol{z}}(1),\mu\right) $ = $ {\boldsymbol{\Upsilon}}_{1,\mu}\left( {\boldsymbol{z}}(a), {\boldsymbol{z}}(b),\mu\right) $
$ \tilde {\boldsymbol{\Upsilon}}_{2,\mu}\left(\tilde {\boldsymbol{z}}(0),\tilde {\boldsymbol{z}}(1),\mu\right) $ = $ {\boldsymbol{\Upsilon}}_{2,\mu}\left( {\boldsymbol{z}}(a), {\boldsymbol{z}}(b),\mu\right) $
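These equalities are immediate once the normalized and un-normalized boundary values are identified. As a brief sketch, assuming the affine normalization of the independent variable onto $ [0,1] $,

$ \tilde{\boldsymbol{z}}(s) = {\boldsymbol{z}}\left(a + s\left(b - a\right)\right), \quad s \in [0,1], \qquad \text{so that} \qquad \tilde{\boldsymbol{z}}(0) = {\boldsymbol{z}}(a), \quad \tilde{\boldsymbol{z}}(1) = {\boldsymbol{z}}(b), $

the normalized boundary condition function satisfies $ \tilde{\boldsymbol{\Upsilon}}\left(\tilde{\boldsymbol{z}}(0),\tilde{\boldsymbol{z}}(1),\mu\right) = {\boldsymbol{\Upsilon}}\left({\boldsymbol{z}}(a),{\boldsymbol{z}}(b),\mu\right) $, and differentiating this identity with respect to $ \tilde{\boldsymbol{z}}(0) $, $ \tilde{\boldsymbol{z}}(1) $, and $ \mu $ (and likewise for $ {\boldsymbol{\Upsilon}}_1 $ and $ {\boldsymbol{\Upsilon}}_2 $) yields each row of Table 13.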