May  2022, 18(3): 2109-2128. doi: 10.3934/jimo.2021059

Stereo visual odometry based on dynamic and static features division

College of Missile Engineering, Rocket Force University of Engineering, Xi'an, Shaanxi 710025, China

* Corresponding author: Guangbin Cai

Received: June 2020. Revised: December 2020. Early access: March 2021. Published: May 2022.

Fund Project: The first author is mainly supported by the NSSF of China under Grant No. 61773387.

Accurate camera pose estimation in dynamic scenes is an important challenge for visual simultaneous localization and mapping, and reducing the effect of moving objects on pose estimation is critical. To tackle this problem, a robust visual odometry approach for dynamic scenes is proposed that precisely distinguishes dynamic from static features. The key idea is to combine the scene flow with the principle that the relative spatial distances between static features remain invariant. Moreover, a new threshold is proposed to identify dynamic features, which are then eliminated after matching with virtual map points. In addition, a new similarity calculation function is proposed to improve the performance of loop-closure detection, and the camera pose is optimized once a loop closure is obtained. Experiments on the TUM datasets and in real scenes show that the proposed method significantly reduces tracking errors and estimates the camera pose precisely in dynamic scenes.
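As a rough illustration of the feature-division idea described above, the following Python sketch combines the two cues mentioned in the abstract: the residual scene flow left after compensating the estimated camera motion, and the invariance of relative spatial distances between static points. The function name, thresholds, and the median-based consistency test are illustrative assumptions, not the paper's exact formulation, and the subsequent elimination against virtual map points is not modeled here.

```python
import numpy as np

def divide_features(prev_pts, curr_pts, R, t, flow_thresh=0.05, dist_thresh=0.02):
    """Split matched 3D feature points into static and dynamic sets (illustrative only).

    prev_pts, curr_pts : (N, 3) arrays of triangulated stereo points in the previous
                         and current camera frames.
    R, t               : rough estimate of the camera rotation/translation between frames.
    flow_thresh        : scene-flow magnitude above which a point is suspect (metres).
    dist_thresh        : tolerated violation of pairwise-distance invariance (metres).
    """
    # Scene flow: residual motion of each point once the camera motion is compensated.
    predicted = (R @ prev_pts.T).T + t              # where a static point should appear
    flow = np.linalg.norm(curr_pts - predicted, axis=1)
    anchors = np.where(flow < flow_thresh)[0]       # candidate static anchor points

    # Static points keep their pairwise distances across frames; a moving point
    # violates this against most of the static anchors.
    static = []
    for i in range(len(curr_pts)):
        d_prev = np.linalg.norm(prev_pts[anchors] - prev_pts[i], axis=1)
        d_curr = np.linalg.norm(curr_pts[anchors] - curr_pts[i], axis=1)
        if anchors.size and np.median(np.abs(d_curr - d_prev)) < dist_thresh:
            static.append(i)
    static = np.array(static, dtype=int)
    dynamic = np.setdiff1d(np.arange(len(curr_pts)), static)
    return static, dynamic
```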

Citation: Hui Xu, Guangbin Cai, Xiaogang Yang, Erliang Yao, Xiaofeng Li. Stereo visual odometry based on dynamic and static features division. Journal of Industrial and Management Optimization, 2022, 18 (3) : 2109-2128. doi: 10.3934/jimo.2021059
References:
[1] P. F. Alcantarilla, J. J. Yebes, J. Almazán et al., On combining visual SLAM and dense scene flow to increase the robustness of localization and mapping in dynamic environments, 2012 IEEE International Conference on Robotics and Automation, Saint Paul, Minnesota, USA, IEEE, 2012.
[2] Y. An, B. Li and L. Wang, Calibration of a 3D laser rangefinder and a camera based on optimization solution, J. Ind. Manag. Optim., 17 (2021), 427-445. doi: 10.3934/jimo.2019119.
[3] A. Angeli, D. Filliat and S. Doncieux, Fast and incremental method for loop-closure detection using bags of visual words, IEEE Transactions on Robotics, 24 (2008), 1027-1037.
[4] C. Bibby and I. Reid, Simultaneous localisation and mapping in dynamic environments (SLAMIDE) with reversible data association, Robotics: Science and Systems, Atlanta, Georgia, USA, 2007.
[5] L. Bose and A. Richards, Fast Depth Edge Detection and Edge Based RGB-D SLAM, IEEE International Conference on Robotics and Automation, Stockholm, Sweden, IEEE, 2016.
[6] C. Cadena, L. Carlone and H. Carrillo, Simultaneous localization and mapping: Present, future, and the robust-perception age, IEEE Transactions on Robotics, 32 (2016), 1309-1332.
[7] C. Choi, A. J. Trevor and H. I. Christensen, RGB-D Edge Detection and Edge-Based Registration, 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, IEEE, 2013.
[8] A. J. Davison, I. D. Reid and N. D. Molton, MonoSLAM: Real-time single camera SLAM, IEEE Transactions on Pattern Analysis & Machine Intelligence, 29 (2007), 1052-1067.
[9] J. Engel, V. Koltun and D. Cremers, Direct sparse odometry, IEEE Transactions on Pattern Analysis & Machine Intelligence, 40 (2018), 611-625.
[10] J. Engel, T. Schöps and D. Cremers, LSD-SLAM: Large-Scale Direct Monocular SLAM, European Conference on Computer Vision, Springer, Zürich, Switzerland, 2014.
[11] J. Fan, On the Levenberg-Marquardt methods for convex constrained nonlinear equations, J. Ind. Manag. Optim., 9 (2013), 227-241. doi: 10.3934/jimo.2013.9.227.
[12] C. Forster, M. Pizzoli and D. Scaramuzza, SVO: Fast Semi-Direct Monocular Visual Odometry, IEEE International Conference on Robotics and Automation, Hong Kong, China, IEEE, 2014.
[13] C. Forster, Z. Zhang and M. Gassner, SVO: Semi-direct visual odometry for monocular and multicamera systems, IEEE Transactions on Robotics, 33 (2017), 249-265.
[14] J. Fuentes-Pacheco, J. Ruiz-Ascencio and J. M. Rendón-Mancha, Visual simultaneous localization and mapping: A survey, Artificial Intelligence Review, 43 (2015), 55-81.
[15] D.-K. Gu, G.-P. Liu and G.-R. Duan, Robust stability of uncertain second-order linear time-varying systems, J. Franklin Inst., 356 (2019), 9881-9906. doi: 10.1016/j.jfranklin.2019.09.014.
[16] D.-K. Gu and D.-W. Zhang, Parametric control to second-order linear time-varying systems based on dynamic compensator and multi-objective optimization, Appl. Math. Comput., 365 (2020), 124681, 25 pp. doi: 10.1016/j.amc.2019.124681.
[17] D.-K. Gu and D.-W. Zhang, A parametric method to design dynamic compensator for high-order quasi-linear systems, Nonlinear Dynamics, 100 (2020), 1379-1400.
[18] C. Kerl, J. Sturm and D. Cremers, Dense Visual SLAM for RGB-D Cameras, 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, IEEE, 2013.
[19] D. H. Kim and J. H. Kim, Image-Based ICP Algorithm for Visual Odometry Using an RGB-D Sensor in a Dynamic Environment, Robot Intelligence Technology and Applications, Gwangju, Korea, Springer, 2013.
[20] D. H. Kim, S. B. Han and J. H. Kim, Visual Odometry Algorithm Using an RGB-D Sensor and IMU in a Highly Dynamic Environment, Robot Intelligence Technology and Applications, Beijing, China, Springer, 2015.
[21] D. H. Kim and J. H. Kim, Effective background model-based RGB-D dense visual odometry in a dynamic environment, IEEE Transactions on Robotics, 32 (2016), 1565-1573.
[22] M. Labbe and F. Michaud, Appearance-based loop closure detection for online large-scale and long-term operation, IEEE Transactions on Robotics, 29 (2013), 734-745.
[23] S. Li and D. Lee, RGB-D SLAM in dynamic environments using static point weighting, IEEE Robotics and Automation Letters, 2 (2017), 2263-2270.
[24] B. Li, D. Yang and L. Deng, Visual vocabulary tree with pyramid TF-IDF scoring match scheme for loop closure detection, Acta Automatica Sinica, 37 (2011), 665-673.
[25] Y. Li, G. Zhang and F. Wang, An improved loop closure detection algorithm based on historical model set, Robot, 37 (2015), 663-673.
[26] Z. L. Lin, G. L. Zhang and E. Yao, Stereo visual odometry based on motion object detection in the dynamic scene, Acta Optica Sinica, 37 (2017), 187-195.
[27] M. Lourakis and X. Zabulis, Model-Based Pose Estimation for Rigid Objects, International Conference on Computer Vision Systems, St. Petersburg, Russia, Springer, 2013.
[28] R. Mur-Artal, J. M. M. Montiel and J. D. Tardós, ORB-SLAM: A versatile and accurate monocular SLAM system, IEEE Transactions on Robotics, 31 (2015), 1147-1163.
[29] R. Mur-Artal and J. D. Tardós, ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras, IEEE Transactions on Robotics, 33 (2017), 1255-1262.
[30] D. Nistér, O. Naroditsky and J. Bergen, Visual odometry for ground vehicle applications, Journal of Field Robotics, 23 (2006), 3-20.
[31] Z. Peng, Research on Vision-Based Ego-Motion Estimation and Environment Modeling in Dynamic Environment, Ph.D. dissertation, Zhejiang University, Hangzhou, China, 2013.
[32] D. Scaramuzza and F. Fraundorfer, Visual odometry, IEEE Robotics & Automation Magazine, 18 (2011), 80-92.
[33] J. Sturm, N. Engelhard, F. Endres et al., A Benchmark for the Evaluation of RGB-D SLAM Systems, IEEE International Conference on Intelligent Robots and Systems, Vilamoura, Portugal, IEEE, 2012.
[34] Y. Sun, M. Liu and M. Q. H. Meng, Improving RGB-D SLAM in dynamic environments: A motion removal approach, Robotics and Autonomous Systems, 89 (2017), 110-122.
[35] W. Tan, H. Liu, Z. Dong et al., Robust Monocular SLAM in Dynamic Environments, IEEE International Symposium on Mixed and Augmented Reality, Adelaide, Australia, IEEE, 2013.
[36] G. Younes, D. Asmar and E. Shammas, Keyframe-based monocular SLAM: Design, survey, and future directions, Robotics and Autonomous Systems, 98 (2017), 67-88.

Figure 1.  Stereo camera model
Figure 2.  Generation of a visual vocabulary tree
Figure 3.  Overview of the proposed algorithm in dynamic scenes
Figure 4.  Classification of the scene flow based on angles [26]
Figure 5.  Invariance of the relative spatial distance of the static points
Figure 6.  Construction of the virtual map points
Figure 7.  Three static features selected by the algorithm
Figure 8.  Dynamic features obtained by the algorithm
Figure 9.  Experiment scene sets
Figure 10.  Experimental results of ORB-VO in lab scenes
Figure 11.  Experimental results of the proposed method in lab scenes
Figure 12.  Loop-closure detection result of the inverse proportional function
Figure 13.  Loop-closure detection result of the negative exponential power function
Figure 14.  Loop-closure detection result of the negative exponential power function
Figure 15.  Comparisons between estimated trajectories and the ground truth in walking sequences
Figure 16.  Comparisons between estimated trajectories and the ground truth in sitting sequences
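Figures 12-14 compare loop-closure detection under different similarity functions. The abstract only states that a new similarity calculation function is introduced; the toy Python comparison below merely contrasts the two generic shapes named in the captions, an inverse-proportional mapping and a negative exponential power mapping from an image-distance score to a similarity score. Both functional forms and the parameter values are assumptions for illustration, not the paper's definitions.

```python
import numpy as np

def similarity_inverse(d, eps=1e-6):
    # Inverse-proportional mapping: very large near d = 0 and unbounded.
    return 1.0 / (d + eps)

def similarity_neg_exp(d, alpha=5.0, p=1.0):
    # Negative exponential power mapping: bounded in (0, 1] and decays smoothly.
    return np.exp(-alpha * d ** p)

# Compare the two mappings over a range of normalized image distances.
for d in np.linspace(0.0, 1.0, 6):
    print(f"d = {d:.2f}   inverse = {similarity_inverse(d):10.2f}   neg-exp = {similarity_neg_exp(d):.3f}")
```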
Table 1.  Translational and rotational drift of the VO methods on the TUM dataset

Sequences           | RMSE of translational drift [m/s]   | RMSE of rotational drift [$^{\circ}$/s]
                    | DVO     BaMVO   SPW-VO  Our Method  | DVO     BaMVO   SPW-VO  Our Method
sitting-static      | 0.0157  0.0248  0.0231  0.0112      | 0.6084  0.6977  0.7228  0.3356
sitting-xyz         | 0.0453  0.0482  0.0219  0.0132      | 1.4980  1.3885  0.8466  0.5753
sitting-rpy         | 0.1735  0.1872  0.0843  0.0280      | 6.0164  5.9834  5.6258  0.6811
sitting-halfsphere  | 0.1005  0.0589  0.0389  0.0151      | 4.6490  2.8804  1.8836  0.6103
walking-static      | 0.3818  0.1339  0.0327  0.0293      | 6.3502  2.0833  0.8085  0.5500
walking-xyz         | 0.4360  0.2326  0.0651  0.1034      | 7.6669  4.3911  1.6442  2.3273
walking-rpy         | 0.4038  0.3584  0.2252  0.2143      | 7.0662  6.3898  5.6902  3.9555
walking-halfsphere  | 0.2628  0.1738  0.0527  0.1061      | 5.2179  4.2863  2.4048  2.2983
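Table 1 reports drift as the per-second RMSE of the relative pose error, in the style of the TUM RGB-D benchmark [33]. The sketch below shows how the translational part of such a figure can be computed from time-aligned pose sequences; it is a simplified stand-in for the benchmark's evaluation script, and the fixed frame interval and function name are assumptions.

```python
import numpy as np

def translational_drift_rmse(est_poses, gt_poses, delta=1):
    """RMSE of the translational relative pose error over a fixed frame interval.

    est_poses, gt_poses : sequences of 4x4 homogeneous camera poses, time-aligned.
    delta               : frame interval; combine with the frame rate to express m/s.
    """
    errors = []
    for i in range(len(est_poses) - delta):
        rel_est = np.linalg.inv(est_poses[i]) @ est_poses[i + delta]
        rel_gt = np.linalg.inv(gt_poses[i]) @ gt_poses[i + delta]
        err = np.linalg.inv(rel_gt) @ rel_est       # residual relative motion
        errors.append(np.linalg.norm(err[:3, 3]))   # translational component only
    return float(np.sqrt(np.mean(np.square(errors))))
```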
Table 2.  RMSE of the ATE of camera pose estimation (m)

Sequences           | ORB-SLAM2 | MR-SLAM | SPW-SLAM | SF-SLAM | Our Method
sitting-static      | 0.0082    | –       | –        | 0.0081  | 0.0073
sitting-xyz         | 0.0094    | 0.0482  | 0.0397   | 0.0101  | 0.0090
sitting-rpy         | 0.0197    | –       | –        | 0.0180  | 0.0162
sitting-halfsphere  | 0.0211    | 0.0470  | 0.0432   | 0.0239  | 0.0164
walking-static      | 0.1028    | 0.0656  | 0.0261   | 0.0120  | 0.0108
walking-xyz         | 0.4278    | 0.0932  | 0.0601   | 0.2251  | 0.0884
walking-rpy         | 0.7407    | 0.1333  | 0.1791   | 0.1961  | 0.3620
walking-halfsphere  | 0.4939    | 0.1252  | 0.0489   | 0.0423  | 0.0411

– : no result reported for this sequence.
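The absolute trajectory error (ATE) in Table 2 is conventionally computed after rigidly aligning the estimated trajectory to the ground truth. The following sketch follows that convention (Horn/Umeyama alignment without scale); it approximates, rather than reproduces, the exact evaluation behind the table, and the function name is illustrative.

```python
import numpy as np

def ate_rmse(est_xyz, gt_xyz):
    """RMSE of the absolute trajectory error after rigid alignment (no scale).

    est_xyz, gt_xyz : (N, 3) arrays of associated estimated and ground-truth positions.
    """
    mu_e, mu_g = est_xyz.mean(axis=0), gt_xyz.mean(axis=0)
    E, G = est_xyz - mu_e, gt_xyz - mu_g
    U, _, Vt = np.linalg.svd(E.T @ G)               # SVD of the cross-covariance
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = Vt.T @ D @ U.T                              # rotation aligning estimate to truth
    t = mu_g - R @ mu_e
    aligned = (R @ est_xyz.T).T + t
    residuals = np.linalg.norm(aligned - gt_xyz, axis=1)
    return float(np.sqrt(np.mean(residuals ** 2)))
```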