AUTOMATIC TRACKING AND POSITIONING ALGORITHM FOR MOVING TARGETS IN COMPLEX ENVIRONMENT

Abstract. When moving targets are located in a complex environment, existing positioning algorithms take a long time and their results are inconsistent with the actual positions of the targets; positioning efficiency is low and the results are inaccurate. This paper proposes an automatic tracking and positioning algorithm for moving targets in a complex environment. The algorithm establishes the geodetic coordinate system and the space rectangular coordinate system and completes the transformation between them, improving the accuracy of the positioning result. The signal is reconstructed and a MIMO radar positioning model is used to complete the automatic tracking and positioning of the moving target in the complex environment, reducing the time consumed. Experimental results show that the proposed method can quickly and accurately track and locate moving targets in a complex environment.


1. Introduction. With the development of science and technology, the means of positioning have also changed [15]. When the level of science and technology was low, the positioning of enemy or threat targets relied mainly on investigators carrying out on-the-spot or close-range reconnaissance [5]. In that period this way of investigation gave full play to human initiative, and even in modern warfare, with its rapid technological development, it remains an important means of reconnaissance [22]. However, human subjectivity introduces large errors, working conditions are hard, and efficiency is low; in most cases people cannot get close to the threat targets, which brings many difficulties to the investigation [18,20]. With the progress of science and technology, optical instruments appeared, and people could use them to observe distant targets [12]. However, optical instruments depend on the light emitted by the observed target, or on light from other illuminating sources reflected by the target, which limits the conditions of use: optical instruments cannot observe at night; in the daytime, cloud, rain, fog, snow and other poor weather conditions greatly reduce the ability to observe; and even in good weather, the distance over which the human eye can observe through an optical instrument is very limited [21,25]. After entering the information era, this kind of mechanical observation and positioning obviously could no longer meet application requirements [16]. With the development and application of radar during World War II, beginning in the late 1930s, active radar systems could detect targets at ranges of thousands of meters and became a basic tool of modern military warfare [17]. Radar differs from optical instruments: it emits electromagnetic waves to detect targets.
As long as targets reflect the electromagnetic waves emitted by the radar, observation is possible, and distant targets can be detected by day or by night [2,3,7-11,23]. The radar system obtains the distance between radar and target by measuring the time delay between the transmitted wave and the echo; the angle of the target is measured by the directivity of the antenna; and the velocity of the target is calculated from the rate of change between the carrier frequency of the transmitted wave and that of the echo [24]. When existing moving-target positioning algorithms are used to locate a moving target in a complex environment, the positioning time is long and the location result is inconsistent with the actual location of the moving target, so location efficiency is low and the location results are inaccurate [1,4,6,13,14,19,26,27]. To solve these problems, this paper proposes an automatic tracking and positioning algorithm for moving targets in a complex environment.

2. Camera model and coordinate transformation.
2.1. Camera model. Optical imaging is based on the pinhole imaging model derived from the theory of perspective projection. If lens distortion is taken into account, camera models can be divided into nonlinear and linear models. Because projective geometry, affine geometry and Euclidean geometry are easy to understand and calibrate, geometric transformation theory can be used to establish an approximately linear model. The imaging model is the projection of an object in three-dimensional space onto the imaging plane. The ideal projection model is the central projection model of optics, namely the linear model, in which all scenery is imaged onto the plane through the optical axis of the camera; it is also known as the pinhole model. In the pinhole model, it is assumed that the light reflected from the surface of the object is projected onto the image plane through a pinhole, satisfying the rectilinear propagation condition of light (Figure 1, model of pinhole imaging). The lens transformation model based on pinhole imaging is the most basic model and also the most common ideal model. The camera pinhole model mainly comprises the optical center (the projection center), an imaging plane and the optical axis, as shown in Figure 1. Here, (X_C, Y_C, Z_C) is the camera coordinate system; (u, o, v) is the image pixel coordinate system; p(x, y) is the image point coordinate; and P(X_C, Y_C, Z_C) is the coordinate of the camera in the geodetic coordinate system. According to the imaging principle and imaging characteristics of a camera, the brightness of every point on the photograph reflects the intensity of the reflected light at the corresponding point on the surface of the object; the location of an image point on the image is related to the geometric position of the corresponding point on the surface of the space object, and this geometric relation is determined by the geometric model of the camera.
The vertical distance CO from the camera optical center C to the photograph is the principal distance of the camera: CO = f, where f is the focal length and O is the principal point of the photograph. Because of imprecision in camera manufacture, the midpoint of the photo does not coincide exactly with the principal point o; there is a slight deviation (x_0, y_0). This error is called the principal point deviation, and (x_0, y_0, f) are called the intrinsic parameters of the camera.
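As a concrete numerical illustration, the central projection with principal-point deviation described above can be sketched as follows (a minimal sketch; the function name and the example values are illustrative, not from the paper):

```python
import numpy as np

def project_pinhole(P_cam, f, x0, y0):
    """Project a 3-D point in camera coordinates (X_C, Y_C, Z_C)
    onto the image plane of an ideal pinhole camera.

    f        : focal length (principal distance CO)
    (x0, y0) : principal point deviation (intrinsic parameters)
    Returns the image-plane coordinates (x, y) in the same units as f.
    """
    X, Y, Z = P_cam
    if Z <= 0:
        raise ValueError("point must lie in front of the camera (Z_C > 0)")
    # Central projection, then shift by the principal point deviation.
    x = f * X / Z + x0
    y = f * Y / Z + y0
    return np.array([x, y])

# A point 10 units in front of the camera, offset 2 units to the side:
print(project_pinhole((2.0, 0.0, 10.0), f=5.0, x0=0.1, y0=-0.1))
```

The principal-point terms (x0, y0) simply shift the ideal projection f·X/Z, f·Y/Z, which is why they are counted among the intrinsic parameters.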
To solve for the coordinates of target points from the coordinates of image points, it is first necessary to determine the spatial position and orientation of the camera beam at the moment of exposure, that is, the position and orientation of the camera coordinate system in the geodetic coordinate system. The position of the camera coordinate system in the geodetic coordinate system is determined by the three-dimensional coordinates (X_C, Y_C, Z_C) of the projection center in the geodetic coordinate system, that is, the line elements. The orientation is determined by the angular elements (ϕ, ω, κ) of the camera coordinate system in the geodetic coordinate system. (X_C, Y_C, Z_C) determines the position of the camera; ϕ, ω, κ determine its orientation, where the declination angle ϕ and the obliquity ω determine the direction of the main optical axis, and the rotation angle κ determines the orientation of the photograph in the image plane. The angles ϕ, ω, κ are defined as shown in Figure 2, in which (X_W, Y_W, Z_W) is the WGS-84 coordinate system; (X_C, Y_C, Z_C) is the camera coordinate system; (u, v, w) is the auxiliary camera coordinate system; and (x, o, y) is the image physical coordinate system. The drift angle ϕ is the angle between the projection of the main optical axis on the XZ plane and the Z axis; observing along the positive direction of the Y axis, counterclockwise is the positive direction. The rotation angle κ is the angle, measured in the image plane, between the intersection line of the principal plane (the plane containing the main optical axis and the Y axis) with the image plane and the y axis of the image coordinate system; starting from the intersection line, counterclockwise is the positive direction. The obliquity ω is the angle between the main optical axis and its projection on the XZ plane; observing toward the origin along the positive direction of the x axis, counterclockwise is specified as the positive direction.
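The orientation defined by the three angular elements can be expressed as a rotation matrix. Below is a minimal sketch assuming the common photogrammetric ϕ-ω-κ composition order; the exact axis and sign conventions of the paper's Figure 2 may differ, so this is illustrative only:

```python
import numpy as np

def rotation_phi_omega_kappa(phi, omega, kappa):
    """Rotation matrix for the angular elements (phi, omega, kappa),
    composed in the photogrammetric order R = R_phi @ R_omega @ R_kappa.
    Angles are in radians; sign conventions are assumed, not the paper's."""
    # Rotation associated with the drift angle phi (about the Y axis)
    R_phi = np.array([[ np.cos(phi), 0.0, -np.sin(phi)],
                      [ 0.0,         1.0,  0.0         ],
                      [ np.sin(phi), 0.0,  np.cos(phi)]])
    # Rotation associated with the obliquity omega (about the X axis)
    R_omega = np.array([[1.0, 0.0,            0.0           ],
                        [0.0, np.cos(omega), -np.sin(omega)],
                        [0.0, np.sin(omega),  np.cos(omega)]])
    # Rotation associated with the rotation angle kappa (about the Z axis)
    R_kappa = np.array([[ np.cos(kappa), -np.sin(kappa), 0.0],
                        [ np.sin(kappa),  np.cos(kappa), 0.0],
                        [ 0.0,            0.0,           1.0]])
    return R_phi @ R_omega @ R_kappa

R = rotation_phi_omega_kappa(0.01, 0.02, 0.03)
# Any valid rotation matrix is orthogonal with determinant +1:
print(np.allclose(R @ R.T, np.eye(3)), np.isclose(np.linalg.det(R), 1.0))
```

Whatever sign convention is adopted, the composed matrix must remain orthogonal with unit determinant, which is the property checked above.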
2.2. The establishment of the coordinate system and the coordinate transformation. The image coordinate system is a rectangular coordinate system defined on the two-dimensional image. It is divided into two types: the physical coordinate system, based on physical length, and the pixel coordinate system, in pixels. Customarily, the origin of the image physical coordinate system is defined at the center of the image; its x and y axes are parallel to the x and y axes of the camera coordinate system; it is a plane rectangular coordinate system, and its unit is the millimeter. The image pixel coordinate system is a plane rectangular coordinate system fixed on the image in pixels. Its origin is located at the upper left corner of the image; x_f and y_f are parallel to the x and y axes of the physical coordinate system and, for digital images, correspond to the column and row directions respectively. For convenience of calculation, in this paper the image physical coordinate system, that is, the plane rectangular coordinate system xoy, is selected; the position of an image point on the photo plane is determined by its plane coordinates (x, y) in the image plane. The conversion between the image pixel coordinates (r, c), in pixels, and the image physical coordinates (X_i, Y_i), in mm, can be written (reconstructed here in the standard form) as

X_i = (c − c_0) S_x,  Y_i = (r − r_0) S_y

where (r_0, c_0) are the pixel coordinates of the intersection of the image plane and the optical axis, and S_x and S_y are the magnification coefficients of the camera CCD photosensitive component in the x and y directions.
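The pixel-to-physical conversion can be sketched directly. This assumes the standard form X_i = (c − c_0)S_x, Y_i = (r − r_0)S_y, since the paper's own formula is not recoverable from the extraction; the example values are illustrative:

```python
def pixel_to_physical(r, c, r0, c0, Sx, Sy):
    """Convert pixel coordinates (r = row, c = column) to image physical
    coordinates (X_i, Y_i) in millimetres, relative to the principal
    point (r0, c0), with CCD scale factors (Sx, Sy) in mm/pixel."""
    return (c - c0) * Sx, (r - r0) * Sy

def physical_to_pixel(Xi, Yi, r0, c0, Sx, Sy):
    """Inverse conversion from millimetres back to pixel coordinates."""
    return Yi / Sy + r0, Xi / Sx + c0

# A pixel 100 rows and 50 columns away from the principal point:
Xi, Yi = pixel_to_physical(612, 562, r0=512, c0=512, Sx=0.005, Sy=0.005)
print(Xi, Yi)                                              # 0.25 mm, 0.5 mm
print(physical_to_pixel(Xi, Yi, 512, 512, 0.005, 0.005))   # back to (612, 562)
```

The two functions are exact inverses of one another, which is what makes the image-point position well defined in either system.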
The geodetic coordinates (B, L, H) are converted to spatial Cartesian coordinates (reconstructed here in the standard form) as

X = (N + H) cos B cos L
Y = (N + H) cos B sin L
Z = [N(1 − e²) + H] sin B

The spatial Cartesian coordinates are converted back to geodetic coordinates as

L = arctan(Y/X)
B = arctan[(Z + N e² sin B) / √(X² + Y²)]   (solved iteratively)
H = √(X² + Y²) / cos B − N

where N is the radius of curvature of the prime vertical, N = a/√(1 − e² sin² B), e is the first eccentricity of the ellipsoid, and a is the semi-major axis of the ellipsoid.
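A minimal numerical sketch of both conversions follows, assuming the standard geodetic-to-Cartesian formulas and WGS-84 ellipsoid constants (the iteration scheme and variable names are illustrative choices, not the paper's):

```python
import numpy as np

# WGS-84 ellipsoid parameters
A  = 6378137.0            # semi-major axis a (m)
E2 = 6.69437999014e-3     # first eccentricity squared e^2

def geodetic_to_cartesian(B, L, H):
    """Geodetic (B = latitude, L = longitude, radians; H in metres)
    to spatial Cartesian (X, Y, Z)."""
    N = A / np.sqrt(1.0 - E2 * np.sin(B) ** 2)   # prime-vertical radius
    X = (N + H) * np.cos(B) * np.cos(L)
    Y = (N + H) * np.cos(B) * np.sin(L)
    Z = (N * (1.0 - E2) + H) * np.sin(B)
    return np.array([X, Y, Z])

def cartesian_to_geodetic(X, Y, Z, iterations=10):
    """Inverse conversion; latitude B is obtained by fixed-point iteration."""
    L = np.arctan2(Y, X)
    p = np.hypot(X, Y)
    B = np.arctan2(Z, p * (1.0 - E2))            # initial guess
    for _ in range(iterations):
        N = A / np.sqrt(1.0 - E2 * np.sin(B) ** 2)
        B = np.arctan2(Z + N * E2 * np.sin(B), p)
    N = A / np.sqrt(1.0 - E2 * np.sin(B) ** 2)
    H = p / np.cos(B) - N
    return B, L, H

# Round trip near the test area of the experiments (~24.55 N, 118.09 E):
B0, L0, H0 = np.radians(24.55), np.radians(118.09), 50.0
B1, L1, H1 = cartesian_to_geodetic(*geodetic_to_cartesian(B0, L0, H0))
print(np.allclose([B1, L1, H1], [B0, L0, H0], atol=1e-5))
```

The fixed-point iteration for B converges very quickly because the correction term is scaled by e², which is small for Earth-like ellipsoids.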
If the translation (∆X, ∆Y, ∆Z) between the origins of the two coordinate systems is known, the changes in latitude and longitude can be calculated by the following formulas (the original equations are not recoverable; they are reconstructed here in the standard abridged Molodensky form consistent with the symbol definitions below):

∆B″ = [−∆X sin B₁ cos L₁ − ∆Y sin B₁ sin L₁ + ∆Z cos B₁ + (a₁∆α + α₁∆a) sin 2B₁] / (M sin 1″)
∆L″ = (−∆X sin L₁ + ∆Y cos L₁) / (N cos B₁ sin 1″)

where B₁ and L₁ are the latitude and longitude in the original coordinate system; M = a(1 − e²)/(1 − e² sin² B₁)^{3/2} is the radius of curvature of the meridian circle of the original coordinate system; N = a/(1 − e² sin² B₁)^{1/2} is the radius of curvature of the prime vertical of the original coordinate system; ∆a = a₂ − a₁, where a₁ is the ellipsoidal semi-major axis of the original coordinate system and a₂ is that of the new coordinate system; α₁ is the flattening of the original ellipsoid and ∆α the change in flattening; e² is the square of the first eccentricity of the original ellipsoid; and 1/sin 1″ = 206264.8062 is the number of arc seconds in one radian. After ∆B and ∆L are solved, the coordinate values in the new geodetic coordinate system are

B₂ = B₁ + ∆B,  L₂ = L₁ + ∆L

where B₂ and L₂ are the latitude and longitude in the new coordinate system.
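The datum-shift computation built from the quantities defined in the text (M, N, ∆a, 1/sin 1″) can be sketched as follows. Since the paper gives only the symbol definitions, the formula implemented here is the standard abridged Molodensky form, an assumption rather than the paper's exact equation; the example numbers are illustrative:

```python
import numpy as np

RHO = 206264.8062  # arc seconds per radian, 1 / sin(1")

def molodensky_shift(B1, L1, dX, dY, dZ, a1, f1, da, df):
    """Abridged Molodensky datum shift (standard-form sketch).
    B1, L1 in radians; (dX, dY, dZ) origin translation in metres;
    a1, f1: semi-major axis and flattening of the original ellipsoid;
    da, df: changes to the new ellipsoid. Returns (dB, dL) in arc seconds."""
    e2 = f1 * (2.0 - f1)                        # first eccentricity squared
    sinB, cosB = np.sin(B1), np.cos(B1)
    sinL, cosL = np.sin(L1), np.cos(L1)
    M = a1 * (1.0 - e2) / (1.0 - e2 * sinB**2) ** 1.5  # meridian radius
    N = a1 / np.sqrt(1.0 - e2 * sinB**2)               # prime-vertical radius
    dB = (-dX * sinB * cosL - dY * sinB * sinL + dZ * cosB
          + (a1 * df + f1 * da) * np.sin(2.0 * B1)) * RHO / M
    dL = (-dX * sinL + dY * cosL) * RHO / (N * cosB)
    return dB, dL

# Pure Z-axis origin shift of 100 m at latitude 24.55 deg (illustrative):
dB, dL = molodensky_shift(np.radians(24.55), np.radians(118.09),
                          0.0, 0.0, 100.0,
                          6378137.0, 1 / 298.257223563, 0.0, 0.0)
print(dB, dL)  # dL is zero because dX = dY = 0
```

A pure vertical origin shift changes latitude but not longitude, which the ∆L expression makes explicit: it depends only on ∆X and ∆Y.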
3. Automatic tracking and positioning algorithm for moving target.
3.1. Sparse representation and reconstruction of signals. Suppose the signal x of length N can be represented on an orthonormal basis Ψ = [Ψ₁, Ψ₂, . . . , Ψ_N] as (reconstructed in the standard form)

x = Ψs    (7)

In formula (7), s is an N × 1 column vector consisting of the projection coefficients s_i = ⟨x, Ψ_i⟩. To some extent, x and s are equivalent: they are different representations of the same signal in different domains. If the coefficient vector s of the signal x on the basis Ψ has only K non-zero entries, the signal x is said to be K-sparse.
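The relationship of formula (7) can be illustrated numerically. The sketch below builds an arbitrary orthonormal basis via QR factorization (an illustrative choice; any orthonormal basis such as a DCT would serve) and shows that the projection coefficients ⟨x, Ψ_i⟩ recover the K-sparse vector s exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 64, 3

# An orthonormal basis Psi, here the Q factor of a random matrix:
Psi, _ = np.linalg.qr(rng.standard_normal((N, N)))

# A K-sparse coefficient vector s ...
s = np.zeros(N)
s[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)

# ... and the signal it represents, x = Psi @ s  (formula (7))
x = Psi @ s

# Because Psi is orthonormal, the coefficients come back as s_i = <x, Psi_i>:
s_recovered = Psi.T @ x
print(np.allclose(s_recovered, s))
print(np.count_nonzero(np.abs(s) > 1e-12))  # the sparsity K
```

Orthonormality is what makes x and s interchangeable representations: Ψᵀ is simultaneously the analysis and the inverse-synthesis operator.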
The sparse approximation of a signal essentially involves two tasks: first, according to the structure of the original signal, the best atoms are selected from a given atomic library; then the best combination is selected from those atoms. For a given set D = {g_i, i = 1, 2, . . . , I}, each atom g_i is a unit vector of the Hilbert space H = R^N. For any signal f ∈ H, m atoms must be selected from the set D to form an m-term approximation of the signal (reconstructed in the standard form):

f ≈ Σ_{γ∈I_m} c_γ g_γ    (8)
where I_m is the subscript set of the selected atoms g_γ and c_γ are the coefficients of the sparse decomposition. The approximation error is defined as

ε = ‖f − Σ_{γ∈I_m} c_γ g_γ‖

To meet the requirement of sparsity, under the condition of satisfying formula (8) it is necessary to select, among the possible combinations, the set of atoms whose decomposition has the fewest non-zero coefficients; this solves the problem of sparse representation of signals. Mathematically the problem is expressed as

min ‖c‖₀  s.t.  f = Σ_{γ∈I_m} c_γ g_γ

Assume that the signal x is a compressible signal of length N, and that its sparse representation is

x = Ψs    (11)

where s is the sparse coefficient vector of the signal x. A measurement vector y of length M is obtained by projecting the original signal x onto an M × N (M < N) measurement matrix Φ:

y = Φx    (12)
Substituting formula (11) into formula (12) gives

y = ΦΨs = Θs    (13)

In formula (13), Θ = ΦΨ is an M × N matrix called the sensing matrix. For any K-sparse signal x, if there exists a minimal constant ε_K ∈ (0, 1) such that

(1 − ε_K)‖s_T‖₂² ≤ ‖Θ_T s_T‖₂² ≤ (1 + ε_K)‖s_T‖₂²    (14)

then the sensing matrix Θ is said to satisfy the restricted isometry property (RIP). Here T ⊂ {1, . . . , N}, |T| ≤ K, Θ_T is the M × |T| submatrix formed by the columns of Θ indexed by T, and s_T is the corresponding subvector of s.
If the measurement matrix Φ is incoherent with the sparse transform basis Ψ, then Θ = ΦΨ satisfies the RIP property with high probability. The coherence coefficient is defined as

µ(Φ, Ψ) = √N · max_{1≤i,j≤N} |⟨ϕ_i, Ψ_j⟩|    (15)

In formula (15), ϕ_i are the rows of the matrix Φ and Ψ_j the columns of the matrix Ψ. The size of the coherence coefficient µ indicates the strength of the coherence between the matrices: the lower the correlation between the two (that is, the smaller µ is), the fewer measurements are required.
The signal x is an N-dimensional discrete signal, x = [x₁, x₂, . . . , x_N], and its l-norm is

‖x‖_l = ( Σ_{i=1}^{N} |x_i|^l )^{1/l}    (16)

When l = 0, the 0-norm ‖x‖₀ of the signal x indicates the number of non-zero terms in the vector x, that is, the sparsity of the signal. The l₀ norm fully reflects the number of non-zero elements in the signal and makes the result as sparse as possible, so the most direct signal reconstruction method is to solve the minimum-l₀-norm optimization problem:

ŝ = arg min ‖x‖₀  s.t.  y = Φx    (17)

Generally, a signal is not strictly sparse but compressible. For such a signal, the underdetermined problem of formula (13) is transformed into the problem described by formula (18):

ŝ = arg min ‖s‖₀  s.t.  y = ΦΨs = Θs    (18)

The solution ŝ is obtained, and the reconstructed signal x̂ is then obtained through the sparse transformation relation x̂ = Ψŝ.
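The minimum-l₀ problem of formula (18) is NP-hard in general, so greedy methods are a common practical substitute. Orthogonal matching pursuit (OMP) is not named in the paper; it is used below purely as an illustrative reconstruction algorithm for the model y = Θs:

```python
import numpy as np

def omp(Theta, y, K):
    """Orthogonal matching pursuit: greedily recover a K-sparse s
    from y = Theta @ s. Illustrative sketch, not the paper's algorithm."""
    M, N = Theta.shape
    support = []
    residual = y.copy()
    for _ in range(K):
        # Pick the column of Theta most correlated with the residual ...
        idx = int(np.argmax(np.abs(Theta.T @ residual)))
        support.append(idx)
        # ... then re-fit y on the selected columns by least squares.
        coef, *_ = np.linalg.lstsq(Theta[:, support], y, rcond=None)
        residual = y - Theta[:, support] @ coef
    s_hat = np.zeros(N)
    s_hat[support] = coef
    return s_hat

rng = np.random.default_rng(1)
N, M, K = 64, 32, 3
Theta = rng.standard_normal((M, N)) / np.sqrt(M)  # random sensing matrix
s = np.zeros(N)
s[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
y = Theta @ s                                     # formula (13)
s_hat = omp(Theta, y, K)
print(np.allclose(s_hat, s, atol=1e-8))
```

After each least-squares refit the residual is orthogonal to all selected columns, so OMP never picks the same atom twice; with a random Gaussian Θ and K ≪ M, a K-sparse s is recovered exactly in K iterations with overwhelming probability.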
3.2. MIMO radar positioning model. The MIMO radar emits orthogonal signals from N_T transmitting elements. After being scattered by the target, some of the signals are received by the N_R receiving array elements. Owing to the orthogonality of the transmitted waveforms, the multiple transmitted signals maintain their independence in space, so N_T N_R channels are formed in the air from the transmitting array to the receiving array; each channel corresponds to the path from one particular transmitting element, via the target, to one particular receiving element. The delay of each channel is related to the locations of the target and the antennas. Figure 3 shows a simple model of MIMO radar positioning.
It is assumed that the space target is an ideal point, and the two-dimensional plane containing the target is regarded as a grid structure. The word "grid" is derived from geographic information systems: the grid data structure, also known as the raster data structure, is a simple and intuitive spatial data structure. If the ground is divided into L = m × n small squares, the location of each small square is known. Whether or not a target object lies in a given square can then be expressed by a matrix whose element is 1 where a target exists and 0 where there is no target (Figure 3, MIMO radar model). Following this idea, the space containing the objects is taken as a plane, the plane is divided into a grid, and each target is locked into a grid cell.
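The grid idea above connects directly to the sparse model of Section 3.1: flattening the m × n indicator matrix yields a length-L vector whose sparsity equals the number of targets. A toy illustration (grid size and target positions are arbitrary):

```python
import numpy as np

# The surveillance plane divided into an m x n grid (L = m * n cells).
# Cell values: 1 = target present, 0 = empty.
m, n = 4, 5
grid = np.zeros((m, n), dtype=int)
grid[1, 2] = 1          # one target ...
grid[3, 0] = 1          # ... and another

# Flattening gives the length-L sparse indicator vector used by the
# positioning model; locating targets = finding its non-zero entries.
s = grid.reshape(-1)    # shape (L,) with L = 20
print(s.sum(), np.flatnonzero(s))
```

Recovering the non-zero entries of this vector from far fewer than L measurements is exactly the compressed-sensing problem solved in Section 3.1.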
The MIMO radar emits multiple orthogonal linear frequency-modulated (chirp) signals at the transmitting end. Suppose the narrow-band FM signal launched at transmitter i is x_i(t); it can be written (reconstructed here in a standard chirp form consistent with the expressions below) as

x_i(t) = u(t) e^{j2π(µt² + f_p t)}    (19)

where µ = B/T is the chirp rate, f_p = 1/T, B is the signal bandwidth, T is the pulse duration, and j is the imaginary unit. The signal propagates through the air and is reflected back to the receiving end after encountering a target. σ_ijk(t) is defined as the attenuation coefficient of the path from the i-th transmitter, via the k-th target, back to the j-th receiver, and τ_ijk is the corresponding signal delay:

τ_ijk = ( |P_k − T_i| + |P_k − R_j| ) / c    (20)

where c is the speed of light (c = 3 × 10⁸ m/s), and P_k, T_i and R_j represent the coordinates of the target, the transmitting antenna and the receiving antenna, respectively. The superposed echoes of the K targets, from the i-th transmitter at the j-th receiver, are given by formula (21):

y_ij(t) = Σ_{k=1}^{K} σ_ijk(t) u(t − τ_ijk) e^{j2π[µ(t−τ_ijk)² + f_p(t−τ_ijk)]}    (21)
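The channel delay is the quantity that ties each transmit-receive pair to the target position. A minimal sketch of the bistatic delay, assuming the standard form τ = (transmitter-to-target distance + target-to-receiver distance)/c; the geometry below is illustrative:

```python
import numpy as np

C = 3.0e8  # speed of light (m/s)

def channel_delay(P_k, T_i, R_j):
    """Bistatic propagation delay tau_ijk:
    (distance transmitter -> target -> receiver) / c."""
    P_k, T_i, R_j = map(np.asarray, (P_k, T_i, R_j))
    return (np.linalg.norm(P_k - T_i) + np.linalg.norm(P_k - R_j)) / C

# Illustrative geometry (coordinates in metres, assumed for the example):
tau = channel_delay(P_k=(3000.0, 4000.0), T_i=(0.0, 0.0), R_j=(6000.0, 0.0))
print(tau * 1e6)  # delay in microseconds
```

Because the delay depends on both antenna positions, the N_T N_R channels provide N_T N_R distinct delay measurements for a single target, which is what makes grid-based localization from one snapshot possible.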
It is assumed that the target state space is discretized into a grid of L values, represented as ξ_l, l = 1, . . . , L. If the l-th grid cell coincides with a target state, let ψ_ijl(n) = u(n) e^{j2π[µ(t−τ_ijl)² + f_p(t−τ_ijl)]} and s_ijl(n) = σ_ijl(n); otherwise ψ_ijl(n) = 0 and s_ijl(n) = 0. The elements s_ijl(n) and ψ_ijl(n) are determined by four variables and so form four-dimensional arrays, while y_ij(n) is a three-dimensional array. The elements are ordered by running over the transmitting elements for each receiving element in turn, so that the vectors y(n) and s_l(n) are both N_T N_R × 1 dimensional and ψ_l(n) is an N_T N_R × N_T N_R diagonal matrix. Substituting the state indices l = 1, . . . , L into s_l(n) and ψ_l(n) and arranging them according to formulas (25) and (26), s(n) becomes an LN_T N_R × 1 dimensional vector and ψ(n) an N_T N_R × LN_T N_R dimensional matrix. Further, the sample point n takes the values 1, 2, . . . , N in turn. Since s(n) consists of attenuation coefficients, it is independent of the sample index n, so only the variables y and ψ are stacked over n, according to formulas (27) and (28), respectively.
Finally, the compact measurement model of formula (29) is obtained from the stacked quantities above:

y = ψs    (29)

Solving this sparse linear model with the signal reconstruction method of Section 3.1 yields the grid cells occupied by the targets, completing the automatic tracking and positioning of the moving targets.

4. Experimental results and analysis. The computed and actual locations of 8 ships are listed in Table 1 and Table 2. The longitude ranges from 118°05.4′ to 118°06′, and the latitude ranges from 24°32.5′ to 24°33.6′. The error between the calculated longitude and the actual longitude of each ship is shown in Figure 5, where the diamonds represent the actual longitudes and the triangles the calculated longitudes. The error between the calculated latitude and the actual latitude is shown in Figure 6, where the diamonds represent the actual latitudes and the triangles the calculated latitudes. Figures 7(a) and 7(b) show the positioning times of the target location algorithm based on wireless sensor networks and of the target location algorithm based on curve fitting, respectively; Figure 7(c) shows the time used by the proposed automatic tracking and positioning algorithm for moving targets in a complex environment. Figures 7(a), 7(b) and 7(c) show that the time taken by the proposed algorithm is less than that of the algorithm based on wireless sensor networks and that of the algorithm based on curve fitting, indicating that the proposed algorithm locates moving targets in a complex environment in less time and with higher efficiency.

5. Conclusions. Radar is an important weapon in national defense, in both attack and defense. It plays an important role in the military and an increasingly important role on the large stage of civilian use. Locating targets is the most basic function of radar; in modern electronic warfare in particular, precise location of targets helps weapons attack enemy targets accurately and provides a powerful guarantee for destroying enemy aircraft precisely. At present, moving-target automatic tracking and location algorithms take a long time to locate moving targets in a complex environment, and there is an error between the location result and the actual location of the target. In this paper, an automatic tracking and positioning algorithm for moving targets in a complex environment has been proposed, which can locate moving targets quickly and accurately in such environments.