Identification of Hessian matrix in distributed gradient-based multi-agent coordination control systems

Multi-agent coordination control usually involves a potential function that encodes information of a global control task, while the control input for individual agents is often designed by a gradient-based control law. The properties of the Hessian matrix associated with a potential function play an important role in the stability analysis of equilibrium points in gradient-based coordination control systems. Therefore, the identification of the Hessian matrix in gradient-based multi-agent coordination systems becomes a key step in multi-agent equilibrium analysis. However, very often the identification of the Hessian matrix via entry-wise calculation is a tedious task and can easily introduce calculation errors. In this paper we present some general and fast approaches for the identification of the Hessian matrix based on matrix differentials and calculus rules, which readily yield a compact form of the Hessian matrix for multi-agent coordination systems. We also present several examples of Hessian identification for certain typical potential functions involving edge-tension distance functions and triangular-area functions, and illustrate their applications in the context of distributed coordination and formation control.


1. Introduction.
1.1. Background and related literature. In recent years cooperative coordination and distributed control for networked multiple agents (e.g., autonomous vehicles or mobile robots, etc.) have gained considerable attention in the control, optimization and robotics community [8,15]. This has been motivated by various applications such as formation control, coordination in complex networks, sensor networks, distributed optimization, etc. A typical approach for designing distributed control laws for coordinating individual agents is to associate an objective potential function with the whole multi-agent group, while the control law for each individual agent is a gradient-descent law that minimizes the specified potential function [19,9]. Very often, such potential functions are defined by geometric quantities such as distances or areas related with agents' positions over an interaction graph in the configuration space. Typical scenarios involving gradient-based control in multi-agent coordination include distance-based formation control [16,25,26,4,10], multi-robotic maneuvering and manipulability control [14], motion coordination with constraints [32], among others. Comprehensive discussions and solutions to characterize distributed gradient control laws for multi-agent coordination control are provided in [19] and [20], which emphasize the notion of clique graph (i.e., complete subgraph) in designing potential functions and gradient-based controls. The recent book [23] provides an updated review on recent progress of cooperative coordination and distributed control of multi-agent systems.
For multi-agent coordination control in a networked environment, a key task in the control law design and system dynamics analysis is to determine convergence and stability of such gradient-based multi-agent systems with a group potential function. Gradient systems enjoy several nice convergence properties and can guarantee local convergence if certain properties such as positivity and analyticity of potential functions are satisfied. However, in order to determine the stability of different equilibrium points of gradient systems, the Hessian matrix of the potential function is needed and should be identified.
For gradient systems, the Hessian matrix plays an important role in determining whether an equilibrium point is stable or unstable (i.e., being a saddle point, etc.). The Hessian also provides key information to reveal further properties (such as hyperbolicity) of an equilibrium associated with a potential function. However, identification of the Hessian matrix is a non-trivial and often very tedious task, which becomes even more involved in the context of multi-agent coordination control, in that the graph topology that models agents' interactions in a networked manner should also be taken into consideration in the Hessian formula. The standard way of Hessian identification usually involves entry-wise calculation, which we refer to as the 'direct' approach. But this approach soon becomes intractable when a multi-agent coordination system under consideration involves complicated dynamics, and the interaction graph grows in size with more complex topologies. Alternatively, matrix calculus that takes into account graph topology and coordination laws offers a more convenient approach to identifying Hessian matrices and deriving a compact Hessian formula, and this motivates this paper.
In this paper, with the help of matrix differentials and calculus rules, we discuss Hessian identification for several typical potentials commonly used in gradient-based multi-agent coordination control. We do not aim to provide a comprehensive study on Hessian identification for multi-agent coordination systems, but we will identify Hessian matrices for two general potentials associated with an underlying undirected graph topology. The first is an edge-based, distance-constrained potential that is defined by an edge function for a pair of agents, usually involving inter-agent distances. The overall potential is a sum of all individual potentials over all edges. The second type of potential function is defined by a three-agent subgraph, usually involving the (signed) area quantity spanned by a three-agent subgraph. We will use formation control with signed area constraints as an example of such distributed coordination systems, and illustrate how to derive the Hessian matrix for these coordination potentials in a general graph. The identification process of the Hessian formula can be extended to identify Hessian matrices for even more general potential functions used in multi-agent coordination control.
1.2. Paper contributions and organization. The main contributions of this paper include the following. We will first present two motivating examples with comparisons of different identification approaches, in which we favor the 'indirect' approach based on matrix calculus. For some typical multi-agent potentials defined as edge-tension, distance-based functions, we will derive a general formula of the Hessian matrix that can be readily applied in calculating Hessians for potential functions with particular terms. For potential functions involving both distance functions and triangular-area functions, we will show, by using two representative examples, how a compact form of the Hessian matrix can be obtained by following basic matrix calculus rules. Note it is not the aim of this paper to cover all different types of potentials in multi-agent coordination and identify their Hessian formulas. Rather, apart from the identification results of several Hessians, the paper will also serve as a tutorial on Hessian identification for multi-agent coordination systems by analyzing some representative potential functions, and by following matrix calculus rules we aim to advance this approach to Hessian identification in the context of multi-agent coordination control.
This paper is organized as follows. Section 2 reviews several essential tools of matrix/vector differentials and calculus rules that will be used in the derivation of the Hessian matrix for various potential functions. Section 3 presents preliminaries on basic graph theoretic tools in modeling multi-agent distributed systems, and on gradient systems for designing distributed gradient controllers for multi-agent coordination control. Motivating examples with a two-agent system and with a three-agent system are discussed in Section 4, which presents clear advantages of using matrix calculus rules in identifying the Hessian matrix for multi-agent coordination potentials. Section 5 discusses a unified and general formula of Hessian identification for edge-tension, distance-based potentials that are commonly used in modeling multi-agent coordination tasks. Several typical examples of edge-based potentials are also discussed in this section, with their Hessian matrices correctly identified by following the derived general formula. Section 6 shows general approaches for identifying the Hessian matrix for composite potential functions that involve not only edge-based distance functions but also triangular-area-based functions within three-agent groups as complete subgraphs. Brief discussions and remarks are given in Section 7, which concludes this paper.
1.3. Notations. The notations used in this paper are fairly standard. A real scalar-valued function f is called a C^r function if it has continuous derivatives up to order r. The notation 'd' denotes 'differential'. We use R^n to denote the n-dimensional Euclidean space, and R^{m×n} to denote the set of m×n real matrices. The transpose of a matrix or vector M is denoted by M^T. For a vector v, the symbol ‖v‖ denotes its Euclidean norm. We denote the n×n identity matrix by I_n. A diagonal matrix obtained from the scalars x_1, x_2, ..., x_n with x_i ∈ R as its diagonal entries is denoted by diag(x_1, x_2, ..., x_n) ∈ R^{n×n}, and a block diagonal matrix obtained from n d-dimensional vectors x_1, x_2, ..., x_n with x_i ∈ R^d as its diagonal block entries is denoted by blk-diag(x_1, x_2, ..., x_n) ∈ R^{dn×n}. The symbol ⊗ denotes the Kronecker product.
2. Background on vector/matrix differentials. In this section we review some background on matrix calculus, in particular some fundamental rules on vector/ matrix differentials. More discussions and properties on matrix calculus can be found in [31,Chapter 3], [12,Chapter 15], and [1, Chapter 13].
Consider a real scalar function f(x): R^m → R that is differentiable with respect to the variable x = [x_1, ..., x_m]^T ∈ R^m. The first-order differential (or simply differential) of the multivariate function f(x_1, ..., x_m) is

df(x) = (∂f(x)/∂x_1) dx_1 + (∂f(x)/∂x_2) dx_2 + ... + (∂f(x)/∂x_m) dx_m,   (1)

or, in a compact form,

df(x) = (∂f(x)/∂x) dx,   (2)

where ∂f(x)/∂x := [∂f(x)/∂x_1, ..., ∂f(x)/∂x_m] and dx := [dx_1, ..., dx_m]^T. In this way one can identify the Jacobian matrix D_x f(x) := ∂f(x)/∂x ∈ R^{1×m}, which is a row vector. Following convention, we also denote the gradient as a column vector, ∇_x f(x) := [∂f(x)/∂x_1, ..., ∂f(x)/∂x_m]^T ∈ R^{m×1}. Note that the same rule can also be applied to the identification of the Jacobian matrix of a real vector-valued function f(x): R^m → R^n, in which case the Jacobian matrix is identified as D_x f(x) := ∂f(x)/∂x ∈ R^{n×m}. Now we consider a real scalar function f(x) ∈ C²: R^m → R (i.e., a twice differentiable function). We denote the Hessian matrix, i.e., the second-order derivative of the real function f(x), by H_f(x), which is defined as

H_f(x) := ∂²f(x)/(∂x ∂x^T) ∈ R^{m×m}.   (3)

In a compact form, we can also write

H_f(x) = D_x(∇_x f(x)),   (4)

i.e., the Jacobian of the gradient. Therefore, the (i, j)-th entry of H_f is defined as

[H_f]_{ij} = ∂²f(x)/(∂x_i ∂x_j) = ∂²f(x)/(∂x_j ∂x_i),   (5)

where the second equality is due to the symmetry of the Hessian matrix. The entry-wise definition of the Hessian H_f in (5) presents a standard and direct approach to identify the Hessian matrix of a real scalar function f. However, in general it is not convenient to perform the calculation in practice by following the entry-wise definition (5). We will now discuss a faster and more efficient approach for Hessian matrix identification based on matrix calculus rules.
From the compact form of the first-order differential df(x) in (2), one can calculate the second-order differential as

d²f(x) = d(df(x)) = (dx)^T (∂²f(x)/(∂x ∂x^T)) dx = (dx)^T H_f(x) dx,   (6)

which presents a quick and convenient way to identify the Hessian matrix in a compact form. Note that in the above derivation we have used the fact that d(dx) = 0 because dx is not a function of the vector x. In this paper, we will frequently use (6) to identify Hessian matrices for several typical potential functions applied in multi-agent coordination control.
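As a quick sanity check on the identities above, the entry-wise definition (5) can be approximated by finite differences and compared against a hand-derived Hessian. The following Python sketch (our own illustration, not from the paper) does this for a simple quadratic function whose Hessian is known in closed form:

```python
# Hypothetical sketch: compare a hand-derived Hessian with the entry-wise
# finite-difference approximation of definition (5).
import numpy as np

def numerical_hessian(f, x, h=1e-4):
    """Entry-wise Hessian via forward finite differences."""
    m = len(x)
    H = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            e_i, e_j = np.eye(m)[i], np.eye(m)[j]
            H[i, j] = (f(x + h*e_i + h*e_j) - f(x + h*e_i)
                       - f(x + h*e_j) + f(x)) / h**2
    return H

# Example: f(x) = x^T A x, whose Hessian is A + A^T.
A = np.array([[2.0, 1.0], [0.0, 3.0]])
f = lambda x: x @ A @ x
x0 = np.array([0.7, -1.2])

H_analytic = A + A.T
H_numeric = numerical_hessian(f, x0)
assert np.allclose(H_numeric, H_analytic, atol=1e-4)
```

For a quadratic function the forward-difference formula is essentially exact, so the two matrices agree up to floating-point noise.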
3. Preliminaries on graph theory and gradient systems.
3.1. Basic graph theoretic tools and applications in modeling multi-agent systems. Interactions in multi-agent coordination systems are usually modeled by graphs, for which we review several graph theoretic tools in this section. Consider an undirected graph with m edges and n vertices, denoted by G = (V, E) with vertex set V = {1, 2, ..., n} and edge set E ⊂ V × V. Each vertex represents an agent, and the edge set represents the communication or interaction relationships between different agents. The neighbor set N_i of vertex i is defined as N_i := {j ∈ V : (i, j) ∈ E}. The matrix relating the vertices to the edges is called the incidence matrix H = {h_ij} ∈ R^{m×n}, whose entries are defined (with arbitrary edge orientations) as

h_ij = 1, if the i-th edge enters vertex j; h_ij = −1, if the i-th edge leaves vertex j; h_ij = 0, otherwise.   (7)
Another important matrix representation of a graph G is the Laplacian matrix L(G) [17]. For an undirected graph, the associated Laplacian matrix can be written as L(G) = H^T H. For more introductions to algebraic graph theory and its applications in distributed multi-agent systems and networked coordination control, we refer the readers to [17] and [7]. Let p_i ∈ R^d denote a point that is assigned to agent i ∈ V in the d-dimensional Euclidean space R^d. The stacked vector p = [p_1^T, p_2^T, ..., p_n^T]^T ∈ R^{dn} represents a configuration of G realized in R^d. Following the definition of the matrix H, one can construct the relative position vector as an image of H ⊗ I_d acting on the position vector p:

z = (H ⊗ I_d) p,   (8)

where z = [z_1^T, z_2^T, ..., z_m^T]^T ∈ R^{dm}, with z_k ∈ R^d being the relative position vector for the vertex pair (i, j) defined for the k-th edge: z_k = p_i − p_j. In this paper we may also use notations such as z_kij or z_ij if no confusion arises.
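The constructions above are straightforward to reproduce numerically. The following sketch (our own, using an illustrative 3-cycle with arbitrary edge orientations) builds the incidence matrix H, the Laplacian L(G) = H^T H, and the stacked relative position vector z = (H ⊗ I_d)p from (8):

```python
# Our own small illustration: incidence matrix, Laplacian, and relative
# positions for a 3-cycle; signs follow the chosen edge orientations.
import numpy as np

d = 2                                   # ambient dimension
edges = [(0, 1), (1, 2), (2, 0)]        # arbitrary orientations
n, m = 3, len(edges)

H = np.zeros((m, n))                    # m edges x n vertices
for k, (i, j) in enumerate(edges):
    H[k, i] = -1.0                      # edge k leaves vertex i
    H[k, j] = 1.0                       # edge k enters vertex j

L = H.T @ H                             # graph Laplacian L(G)
p = np.array([0.0, 0.0, 1.0, 0.0, 0.0, 1.0])   # stacked p_0, p_1, p_2
z = np.kron(H, np.eye(d)) @ p           # z_k = p_j - p_i under this orientation

assert np.allclose(L @ np.ones(n), 0)   # rows of the Laplacian sum to zero
assert np.allclose(z[:d], p[d:2*d] - p[:d])
```

Which endpoint of each edge carries the plus sign is immaterial for L(G) and, as later sections show, for the Hessian formulas as well.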

3.2. Gradient systems and gradient-based multi-agent coordination control. In this section we briefly review the definition and properties of gradient systems. Let V(x): R^n → R_{≥0} be a scalar-valued function that is C^r with r ≥ 2. Consider the following continuous-time system

ẋ(t) = −∇_x V(x).   (9)

The above system is usually called a gradient system, and the corresponding function V(x) is referred to as a potential function. Gradient systems enjoy several convergence properties due to the special structure of the gradient vector field on the right-hand side of (9). Firstly, it should be clear that equilibrium points of (9) are critical points of V(x). Moreover, at any point other than an equilibrium point, the vector field (9) is perpendicular to the level sets of V(x). In fact, it is obvious to observe that

V̇(x) = (∇_x V(x))^T ẋ = −‖∇_x V(x)‖² ≤ 0,

i.e., V(x) is always non-increasing along the trajectories of (9). The following results are also obvious.
Theorem 3.1. Consider the gradient system (9) with the associated potential V (x).
• V̇(x) ≤ 0, and V̇(x) = 0 if and only if x is an equilibrium point of (9).
• Suppose x̄ is an isolated minimum of a real analytic V(x), i.e., there is a neighborhood of x̄ that contains no other minima of V(x). Then x̄ is an asymptotically stable equilibrium point of (9).
The proof of the above facts can be found in e.g. [29,Chapter 15]. Note that in the second statement we have emphasized the condition isolated minimum in the convergence property. We also refer the readers to the book [29, Chapter 15] for more introductions and properties on gradient vector fields and gradient systems.
Note that a local minimum of V is not necessarily a stable equilibrium point of (9), unless further properties of the potential V are imposed (smoothness of the potential V alone is not enough). In [2], several examples (and counterexamples) are carefully constructed to show the relationship between local minima of V and stable equilibrium points of (9). In particular, it is shown in [2] that with analyticity of the potential V, local minimality becomes a necessary and sufficient condition for stability.

Theorem 3.2. ([2, Theorem 3]) Let V be real analytic in a neighborhood of an equilibrium x̄ ∈ R^n. Then, x̄ is a stable equilibrium point of (9) if and only if it is a local minimum of V.
In order to determine the convergence and stability properties of general equilibrium points of a gradient system (9), one needs to further analyze the linearization matrix of (9) (i.e., the Hessian matrix of V with a reversed sign). Therefore, identification of the Hessian matrix is a key step prior to analyzing equilibrium and convergence properties of gradient systems.
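This relationship can be illustrated on a toy scalar potential (our own example): the Jacobian of the gradient flow at an equilibrium equals the negative Hessian, so an indefinite Hessian flags a saddle point.

```python
# Our own toy example: for xdot = -grad V, the Jacobian at an equilibrium is
# -H_V; eigenvalues of H_V decide stability (indefinite H_V => saddle).
import numpy as np

V = lambda x: x[0]**2 - x[1]**2             # saddle potential at the origin
grad_V = lambda x: np.array([2*x[0], -2*x[1]])

# Jacobian of the vector field f = -grad V at x = 0, by finite differences
h, xeq = 1e-6, np.zeros(2)
J = np.column_stack([(-grad_V(xeq + h*e) + grad_V(xeq)) / h for e in np.eye(2)])

H_V = np.array([[2.0, 0.0], [0.0, -2.0]])   # analytic Hessian of V
assert np.allclose(J, -H_V, atol=1e-4)      # linearization = negative Hessian

eigs = np.linalg.eigvalsh(H_V)
assert eigs.min() < 0 < eigs.max()          # indefinite: the origin is a saddle
```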
In the context of multi-agent coordination control, gradient systems and gradient-based control provide a natural solution to coordination controller design. Very often, a group objective function for a multi-agent system serves as the potential function, and the control input for each agent typically involves a gradient-descent control that aims to minimize the specified potential function. A key question is whether the gradient control input for each agent is local and distributed, in the sense that the control input only involves information (or relative information) of an agent itself and its neighbors as described by the underlying network graph that models interactions between individual agents. This question is addressed in [19], from which we recall some key definitions and results as follows. The following definition refers to a fundamental property of objective potential functions whose gradient-based controllers are distributed.

Definition 3.3. A C¹ potential function V is called gradient-distributed over the graph G if and only if its gradient-based controller (9) is distributed; that is, there exist n functions f_i such that

ṗ_i = −∇_{p_i} V = f_i(p_i, {p_j}_{j∈N_i}), i = 1, ..., n.   (10)

The recent papers [19] and [20] provide a comprehensive study on gradient-based distributed control, in which a full characterization of the class of all gradient-distributed objective potential functions is discussed. A key result in [19] and [20] is that the notion of clique (i.e., complete subgraph) plays a crucial role in obtaining a distributed controller for multi-agent coordination control. That is, in order for a gradient-based coordination control to be distributed, the objective potential function should involve only agents' states in a clique. Typical cliques include edges associated with two agents, triangular subgraphs associated with three agents, etc.
In this paper, our focus will be on the Hessian analysis of distributed gradient-based coordination control systems of the form (10) associated with an overall potential function, with the aim of providing some unified formulas for the Hessian matrix. The identification of Hessian formulas will aid the stability analysis of different equilibria in gradient-distributed multi-agent systems.

4. Motivating examples: Hessian matrix identification for simple gradient-based coordination systems.

4.1. Hessian identification for a two-agent coordination system. As a motivating example, we provide a general approach to identify Hessians for simple gradient-based control systems that involve two or three agents (examples taken from [21]). Consider a multi-agent system that consists of two agents i and j in a 2-D space, with p_i ∈ R² being fixed and p_j ∈ R² governed by

ṗ_j = −∇_{p_j} V_ij,   (11)

where

V_ij = (1/4)(‖p_i − p_j‖² − d_ij²)²,   (12)

in which d_ij is a positive value denoting a desired distance between agents i and j. The gradient vector is

∇_{p_j} V_ij = −(‖p_i − p_j‖² − d_ij²)(p_i − p_j).   (13)

Now we identify the Hessian matrix by following the matrix calculus rule in (6):

d²V_ij = d(dV_ij) = d( −(‖p_i − p_j‖² − d_ij²)(p_i − p_j)^T dp_j ).   (14)

Note that

d(‖p_i − p_j‖² − d_ij²) = −2(p_i − p_j)^T dp_j.   (15)

Therefore,

d( (p_i − p_j)^T dp_j ) = −(dp_j)^T dp_j,   (16)

and from (14) one has

d²V_ij = (dp_j)^T ( 2(p_i − p_j)(p_i − p_j)^T + (‖p_i − p_j‖² − d_ij²) I_2 ) dp_j,   (17)

which readily shows the expression of the Hessian matrix. We summarize:

Lemma 4.1. The Hessian matrix for the potential (12) with the gradient system (11) is identified as

H_{V_ij} = 2(p_i − p_j)(p_i − p_j)^T + (‖p_i − p_j‖² − d_ij²) I_2.   (18)

Remark 1. If one assumes p_j = [0, 0]^T and denotes p_i = [x, y]^T, then the above Hessian (18) reduces to

H = [ 3x² + y² − d_ij², 2xy ; 2xy, x² + 3y² − d_ij² ].   (19)
The Hessian formula (19) has been discussed in [21] for stability analysis of a two-agent distance-based coordination control system. The derivation of Hessian (19) in [21] is based on entry-wise identifications, which is in general not convenient as compared with the above derivation using matrix calculus rules.
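The closed-form Hessian (18) can be checked numerically against the entry-wise finite-difference definition. The following sketch (with illustrative parameter values of our own choosing) does so:

```python
# Numerical check of the two-agent Hessian (18); p_i fixed, p_j free.
import numpy as np

p_i, d_ij = np.array([1.0, 2.0]), 1.5

def V(p_j):                              # potential (12)
    return 0.25 * ((p_i - p_j) @ (p_i - p_j) - d_ij**2) ** 2

def H_formula(p_j):                      # Hessian formula (18)
    z = p_i - p_j
    return 2 * np.outer(z, z) + (z @ z - d_ij**2) * np.eye(2)

def num_hessian(f, x, h=1e-5):
    E = np.eye(len(x))
    return np.array([[(f(x + h*E[i] + h*E[j]) - f(x + h*E[i])
                       - f(x + h*E[j]) + f(x)) / h**2
                      for j in range(len(x))] for i in range(len(x))])

p_j = np.array([-0.3, 0.8])
assert np.allclose(num_hessian(V, p_j), H_formula(p_j), atol=1e-3)
```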

4.2. Hessian identification for a three-agent coordination system. As a further motivating example, we consider a three-agent coordination problem from [21], in which the potential function includes both distance-based potentials and an area-based potential. The overall potential function is defined as

V_ijk = (1/4)(‖p_i − p_k‖² − d_ik²)² + (1/4)(‖p_j − p_k‖² − d_jk²)² + (1/2)K(S − S*)²,   (20)

where K is a positive scalar gain and

S = (1/2)(p_i − p_k)^T J (p_j − p_k), with J = [0, 1; −1, 0],   (21)

defines the signed area of the triangle associated with the three agents (i, j, k). For notational convenience we denote V_ijk = V_d + V_S, with V_d defined as the first two quadratic functions and V_S the third quadratic function in (20). Note that the third quadratic function V_S in (20) with the S terms serves as a signed area constraint that involves positions of a three-agent group, which makes it different from the edge potential function (12) that only involves two agents. In this example, by following the same problem setting as in [21], we again assume that agents i and j are fixed and stationary, and agent k's dynamics are governed by a gradient-descent control law

ṗ_k = −∇_{p_k} V_ijk = (‖p_i − p_k‖² − d_ik²)(p_i − p_k) + (‖p_j − p_k‖² − d_jk²)(p_j − p_k) − (1/2)K(S − S*) J(p_i − p_j),   (22)

where we have used the fact that

dS = (1/2)(dp_k)^T J(p_i − p_j),   (23)

which implies

∂S/∂p_k = (1/2) J(p_i − p_j).   (24)

Now we identify the Hessian matrix H_V = H_{V_d} + H_{V_S} for the gradient flow (22) associated with the potential (20). By following similar steps as in Section 4.1, one obtains

H_{V_d} = 2(p_i − p_k)(p_i − p_k)^T + (‖p_i − p_k‖² − d_ik²) I_2 + 2(p_j − p_k)(p_j − p_k)^T + (‖p_j − p_k‖² − d_jk²) I_2.   (25)

There also holds

d²V_S = K(dS)² = (dp_k)^T ( (K/4) J(p_i − p_j)(p_i − p_j)^T J^T ) dp_k,   (26)

which implies

H_{V_S} = (K/4) J(p_i − p_j)(p_i − p_j)^T J^T.   (27)

We summarize the Hessian identification result in the following:

Lemma 4.2. The Hessian matrix for the potential (20) with the gradient system (22) is identified as

H_V = 2(p_i − p_k)(p_i − p_k)^T + (‖p_i − p_k‖² − d_ik²) I_2 + 2(p_j − p_k)(p_j − p_k)^T + (‖p_j − p_k‖² − d_jk²) I_2 + (K/4) J(p_i − p_j)(p_i − p_j)^T J^T.   (28)

The Hessian formula (28) has been discussed in [21] for stability analysis of a three-agent formation control system with both distance and area constraints. As can be seen above, if the Hessian were calculated via the entry-wise approach, it would be tedious to get the right formula.
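The formula (28) can likewise be verified numerically. The sketch below (with illustrative parameter values of our own choosing) fixes p_i and p_j, and compares (28) with a finite-difference Hessian of (20) in the free variable p_k:

```python
# Numerical check of the three-agent Hessian (28); p_i, p_j fixed, p_k free.
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])
p_i, p_j = np.array([0.0, 0.0]), np.array([2.0, 0.0])
d_ik, d_jk, K, S_star = 1.2, 1.4, 3.0, 0.5

def V(p_k):                              # potential (20)
    S = 0.5 * (p_i - p_k) @ J @ (p_j - p_k)
    return (0.25 * ((p_i - p_k) @ (p_i - p_k) - d_ik**2) ** 2
            + 0.25 * ((p_j - p_k) @ (p_j - p_k) - d_jk**2) ** 2
            + 0.5 * K * (S - S_star) ** 2)

def H_formula(p_k):                      # Hessian formula (28)
    zi, zj, w = p_i - p_k, p_j - p_k, J @ (p_i - p_j)
    return (2 * np.outer(zi, zi) + (zi @ zi - d_ik**2) * np.eye(2)
            + 2 * np.outer(zj, zj) + (zj @ zj - d_jk**2) * np.eye(2)
            + 0.25 * K * np.outer(w, w))

def num_hessian(f, x, h=1e-5):
    E = np.eye(len(x))
    return np.array([[(f(x + h*E[i] + h*E[j]) - f(x + h*E[i])
                       - f(x + h*E[j]) + f(x)) / h**2
                      for j in range(len(x))] for i in range(len(x))])

p_k = np.array([0.9, 1.1])
assert np.allclose(num_hessian(V, p_k), H_formula(p_k), atol=1e-3)
```

Note that S is affine in p_k (the quadratic term vanishes since J is skew-symmetric), which is why the area term contributes only the rank-one matrix in (27).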
The two examples presented in this section motivate the Hessian identification approach via matrix differentials and calculus rules. In the following sections, we will show how to derive general formulas for Hessian matrices for some typical potential functions in multi-agent coordination control.

5. Hessian identification for edge-tension, distance-based potentials. In this section we consider some typical potential functions in multi-agent coordination control, which are defined as edge-tension, distance-based functions, for modeling multi-agent systems in a general undirected graph.
Consider a local edge-tension potential in the form V_ij(p_i, p_j) associated with edge (i, j), which involves p_i ∈ R^d and p_j ∈ R^d. If (i, j) ∉ E, we set V_ij = 0. Furthermore, due to the symmetry of coordination systems interacting over an undirected graph, we also assume that V_ij(p_i, p_j) = V_ji(p_j, p_i). The overall potential for the whole multi-agent group is a summation of local potentials over all edges, constructed as

V(p) = (1/2) Σ_{i=1}^n Σ_{j∈N_i} V_ij(p_i, p_j).   (29)

The coefficient 1/2 in the overall potential (29) is due to the fact that each local potential V_ij is counted twice in the underlying undirected graph.
In this section we consider a general potential function that depends only on the inter-agent distances ‖p_i − p_j‖, defined as

V_ij = V_ij(‖p_i − p_j‖).   (30)

Such a distance-based potential function has found many applications in distributed multi-agent coordination control and has been one of the most popular choices in developing coordination potentials. Typical applications of the potentials (30) and (29) include multi-agent consensus [18], distance-based formation control [16,22,6], formation control laws with collision avoidance [5,3], multi-robotic navigation control [11], multi-agent manipulability control [14], and connectivity-preserving control [13,30], among others.

5.1. Derivation of a general Hessian formula. The control input for agent i is a gradient-descent control

ṗ_i = −∇_{p_i} V = −Σ_{j∈N_i} ∂V_ij(‖p_i − p_j‖)/∂p_i.   (31)

Note that

∂V_ij(‖p_i − p_j‖)/∂p_i = (∂V_ij/∂‖p_i − p_j‖)(∂‖p_i − p_j‖/∂p_i).   (32)

Therefore,

∂V_ij(‖p_i − p_j‖)/∂p_i = V'_ij (p_i − p_j)/‖p_i − p_j‖,   (33)

where we have used the definition V'_ij := ∂V_ij(‖p_i − p_j‖)/∂‖p_i − p_j‖.

Now an explicit form of the gradient-based control (31) is derived as

ṗ_i = −Σ_{j∈N_i} V'_ij (p_i − p_j)/‖p_i − p_j‖.   (34)

The distributed coordination system in (34) may be seen as a weighted multi-agent consensus dynamics, in which the weights, defined as ω_kij := V'_ij / ‖p_i − p_j‖ =: V'_k / ‖z_k‖, are dependent on the states (i.e., state-dependent weights); however, the control objective is encoded by the potential V_ij and its gradient, which encompasses many coordination tasks, while consensus or state agreement is only a special case. A compact form for the overall coordination system is derived as

ṗ = −∇_p V = −(H^T W H ⊗ I_d) p,   (35)

where

W := diag(ω_1, ω_2, ..., ω_m) ∈ R^{m×m}.   (36)

Following the matrix calculus rules in (6), one can show

d²V = (dp)^T d∇_p V = (dp)^T (H^T dW H ⊗ I_d) p + (dp)^T (H^T W H ⊗ I_d) dp,   (37)

where dW = diag(dω_1, dω_2, ..., dω_m).
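The compact gradient form (35) is easy to verify numerically. The sketch below (our own 3-cycle example) uses the quartic edge potential V_k = (1/4)(‖z_k‖² − d_k²)², for which ω_k = ‖z_k‖² − d_k² (an illustrative choice, matching Example 1 of the next subsection), and checks the analytic gradient (H^T W H ⊗ I_d)p against central differences of V:

```python
# Our own check of (35): grad V = (H^T W H kron I_d) p with
# state-dependent weights w_k = ||z_k||^2 - d_k^2.
import numpy as np

d = 2
edges = [(0, 1), (1, 2), (2, 0)]
n, m = 3, len(edges)
dists = np.array([1.0, 1.0, 1.0])

H = np.zeros((m, n))
for k, (i, j) in enumerate(edges):
    H[k, i], H[k, j] = -1.0, 1.0
Hbar = np.kron(H, np.eye(d))            # H kron I_d

def V(p):
    z = (Hbar @ p).reshape(m, d)
    return 0.25 * np.sum(((z**2).sum(axis=1) - dists**2) ** 2)

def grad_formula(p):
    z = (Hbar @ p).reshape(m, d)
    W = np.diag((z**2).sum(axis=1) - dists**2)
    return np.kron(H.T @ W @ H, np.eye(d)) @ p

p = np.array([0.1, 0.0, 1.0, 0.2, 0.4, 1.1])
h = 1e-6
g_num = np.array([(V(p + h*e) - V(p - h*e)) / (2*h) for e in np.eye(d*n)])
assert np.allclose(g_num, grad_formula(p), atol=1e-5)
```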
Recall that z = (H ⊗ I_d)p from (8). We can obtain a nice formula for the term (H^T dW H ⊗ I_d)p as follows. Note that

(H^T dW H ⊗ I_d)p = (H^T ⊗ I_d)(dW ⊗ I_d)(H ⊗ I_d)p = (H^T ⊗ I_d)(dW ⊗ I_d)z,   (39)

and

(dW ⊗ I_d)z = [dω_1 z_1^T, dω_2 z_2^T, ..., dω_m z_m^T]^T.   (40)

Now by defining

Z := blk-diag(z_1, z_2, ..., z_m) ∈ R^{md×m},   (41)

one obtains (dW H ⊗ I_d)p = Z dω from (40). We then analyze the term dω. One can actually show

dω_k = (∂ω_k(‖z_k‖)/∂‖z_k‖) d‖z_k‖ = (ω'_k/‖z_k‖) z_k^T dz_k.   (42)

Therefore, in a compact form, one can obtain

dω = Ω Z^T (H ⊗ I_d) dp,   (43)

ZHIYONG SUN AND TOSHIHARU SUGIE
where

Ω := diag(ω'_1/‖z_1‖, ω'_2/‖z_2‖, ..., ω'_m/‖z_m‖) ∈ R^{m×m}.   (44)

Now (37) becomes

d²V = (dp)^T d∇_p V = (dp)^T (H^T ⊗ I_d) Z Ω Z^T (H ⊗ I_d) dp + (dp)^T (H^T W H ⊗ I_d) dp.   (45)

Then according to the basic formula (6), the Hessian matrix is obtained as the matrix in the middle of (45).
In short, we now summarize the main result on Hessian identification for edge-tension, gradient-based distributed systems as follows.
Theorem 5.1. For the edge-tension distance-based potential function (29) and the associated gradient-based multi-agent system (31), the Hessian matrix is identified as

H_V = (H^T ⊗ I_d) Z Ω Z^T (H ⊗ I_d) + H^T W H ⊗ I_d,   (46)

where H is the incidence matrix of the underlying graph, Z = blk-diag(z_1, z_2, ..., z_m), and the diagonal matrices W and Ω are defined in (36) and (44), respectively.

Remark 3. From the general and compact formula of the Hessian matrix in (46), one can also easily obtain the entry-wise expression of the Hessian. To be specific, with k denoting the edge associated with the vertex pair (i, j) and z_k = p_i − p_j, the (i, i)-th d×d block of the Hessian H_V is expressed as

[H_V]_{ii} = Σ_{j∈N_i} ( (ω'_k/‖z_k‖) z_k z_k^T + ω_k I_d ),   (47)

and the (i, j)-th block is expressed by

[H_V]_{ij} = −( (ω'_k/‖z_k‖) z_k z_k^T + ω_k I_d ) if (i, j) ∈ E, and [H_V]_{ij} = 0 otherwise.   (48)
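Theorem 5.1 can be verified numerically for a concrete edge potential. The sketch below (our own example) uses V_k = (1/2)(‖z_k‖ − d_k)², for which ω_k = 1 − d_k/‖z_k‖ and ω'_k = d_k/‖z_k‖², and compares the formula (46) with a finite-difference Hessian on a 3-cycle:

```python
# Our own numerical verification of the general formula (46) in Theorem 5.1,
# with the edge potential V_k = (1/2)(||z_k|| - d_k)^2.
import numpy as np

d = 2
edges = [(0, 1), (1, 2), (2, 0)]
n, m = 3, len(edges)
dists = np.array([1.0, 1.2, 0.9])

H = np.zeros((m, n))
for k, (i, j) in enumerate(edges):
    H[k, i], H[k, j] = -1.0, 1.0
Hbar = np.kron(H, np.eye(d))            # H kron I_d

def V(p):
    z = (Hbar @ p).reshape(m, d)
    return 0.5 * np.sum((np.linalg.norm(z, axis=1) - dists) ** 2)

def H_formula(p):
    z = (Hbar @ p).reshape(m, d)
    s = np.linalg.norm(z, axis=1)       # ||z_k||
    W = np.diag(1 - dists / s)          # w_k = V_k'/||z_k||
    Omega = np.diag(dists / s**3)       # w_k'/||z_k||
    Z = np.zeros((m * d, m))            # blk-diag(z_1, ..., z_m)
    for k in range(m):
        Z[k*d:(k+1)*d, k] = z[k]
    return Hbar.T @ Z @ Omega @ Z.T @ Hbar + np.kron(H.T @ W @ H, np.eye(d))

def num_hessian(f, x, h=1e-5):
    E = np.eye(len(x))
    return np.array([[(f(x + h*E[i] + h*E[j]) - f(x + h*E[i])
                       - f(x + h*E[j]) + f(x)) / h**2
                      for j in range(len(x))] for i in range(len(x))])

p = np.array([0.1, 0.0, 1.0, 0.2, 0.4, 1.1])
assert np.allclose(num_hessian(V, p), H_formula(p), atol=1e-3)
```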

5.2. Examples on Hessian identification for certain typical potential functions. In this subsection, we show several examples of Hessian identification for some commonly-used potential functions in coordination control. These potential functions have been extensively used in designing gradient-based coordination laws in the literature; however, little study has been reported on their Hessian formulas. We will see how to use the general formula (46) in Theorem 5.1 to derive a compact form of the Hessian matrix for each potential function.
Example 1. Distance-based multi-agent formation control, discussed in e.g. [16]. The edge-tension distance-based potential is

V_ij = (1/4)(‖p_i − p_j‖² − d_ij²)².   (49)

Then ω_k = ‖p_i − p_j‖² − d_ij² = ‖z_k‖² − d_k², and W := diag(ω_1, ω_2, ..., ω_m). It is clear that ω'_k = ∂ω_k(‖z_k‖)/∂‖z_k‖ = 2‖z_k‖, and therefore Ω = diag(2, 2, ..., 2) = 2I_m. Thus, the Hessian matrix in this case is identified as

H_V = 2(H^T ⊗ I_d) Z Z^T (H ⊗ I_d) + H^T W H ⊗ I_d.   (50)

In the context of distance-based formation control, the matrix Z^T(H ⊗ I_d) is the distance rigidity matrix associated with the formation graph, denoted by R.
The Hessian in this form, H_V = 2R^T R + H^T W H ⊗ I_d, has been calculated (with different approaches) in e.g. [22,6,24] for formation systems with special shapes, e.g., the 3-agent triangular shape or the 4-agent rectangular shape.
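The rigidity-matrix form of the Example 1 Hessian can likewise be checked numerically (our own small example):

```python
# Our own check that the Example 1 Hessian can be written with the distance
# rigidity matrix R = Z^T (H kron I_d) as H_V = 2 R^T R + (H^T W H) kron I_d.
import numpy as np

d = 2
edges = [(0, 1), (1, 2), (2, 0)]
n, m = 3, len(edges)
dists = np.array([1.0, 1.0, 1.0])

H = np.zeros((m, n))
for k, (i, j) in enumerate(edges):
    H[k, i], H[k, j] = -1.0, 1.0
Hbar = np.kron(H, np.eye(d))

def V(p):
    z = (Hbar @ p).reshape(m, d)
    return 0.25 * np.sum(((z**2).sum(axis=1) - dists**2) ** 2)

def H_formula(p):
    z = (Hbar @ p).reshape(m, d)
    Z = np.zeros((m * d, m))
    for k in range(m):
        Z[k*d:(k+1)*d, k] = z[k]
    R = Z.T @ Hbar                      # distance rigidity matrix (m x dn)
    W = np.diag((z**2).sum(axis=1) - dists**2)
    return 2 * R.T @ R + np.kron(H.T @ W @ H, np.eye(d))

def num_hessian(f, x, h=1e-5):
    E = np.eye(len(x))
    return np.array([[(f(x + h*E[i] + h*E[j]) - f(x + h*E[i])
                       - f(x + h*E[j]) + f(x)) / h**2
                      for j in range(len(x))] for i in range(len(x))])

p = np.array([0.1, 0.0, 1.0, 0.2, 0.4, 1.1])
assert np.allclose(num_hessian(V, p), H_formula(p), atol=1e-3)
```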
Example 2. Formation control law with edge-based distance constraints, discussed in e.g. [5,3,11]. The edge-based potential is

V_ij = (1/2)(‖p_i − p_j‖ − d_ij)².   (52)

Then ω_k = (‖z_k‖ − d_k)/‖z_k‖, and one can obtain the diagonal matrix W as follows:

W = diag(1 − d_1/‖z_1‖, ..., 1 − d_m/‖z_m‖).   (53)

It is clear that ω'_k = ∂ω_k(‖z_k‖)/∂‖z_k‖ = d_k/‖z_k‖², and therefore

Ω = diag(d_1/‖z_1‖³, ..., d_m/‖z_m‖³).   (54)

Thus, the Hessian matrix in this case is identified as

H_V = (H^T ⊗ I_d) Z Ω Z^T (H ⊗ I_d) + H^T W H ⊗ I_d,   (55)

with W and Ω defined in (53) and (54), respectively.
Example 3. Leader-follower manipulability control, discussed in e.g. [14]. The edge potential is a function in the form

V_ij = (1/2) e_ij²(‖p_i − p_j‖),   (56)

where e_ij is a strictly increasing, twice differentiable function. Now we identify the Hessian matrix by following the above result in Theorem 5.1. Note that ∂V_ij/∂‖p_i − p_j‖ = e_ij e'_ij, where e'_ij = ∂e_ij/∂‖p_i − p_j‖. Therefore ω_k = e_k e'_k/‖z_k‖ and

W = diag(e_1 e'_1/‖z_1‖, ..., e_m e'_m/‖z_m‖).   (57)

It is clear that

ω'_k = ((e'_k e'_k + e_k e''_k)‖z_k‖ − e_k e'_k)/‖z_k‖².   (58)

Therefore, the entries of the diagonal matrix Ω are

ω'_k/‖z_k‖ = ((e'_k e'_k + e_k e''_k)‖z_k‖ − e_k e'_k)/‖z_k‖³.

The Hessian matrix for the potential is identified as

H_V = (H^T ⊗ I_d) Z Ω Z^T (H ⊗ I_d) + H^T W H ⊗ I_d,   (59)

with W defined in (57) and Ω defined with the diagonal entries ω'_k/‖z_k‖ as calculated above.
Example 4. Connectedness-preserving control in multi-agent coordination, discussed in e.g. [13,30]. The edge-based potential function takes the following form

V_ij = ‖p_i − p_j‖²/(δ − ‖p_i − p_j‖),   (60)

where δ is a positive parameter. Note that

∂V_ij/∂‖z_k‖ = ‖z_k‖(2δ − ‖z_k‖)/(δ − ‖z_k‖)²,   (61)

and therefore

ω_k = (2δ − ‖z_k‖)/(δ − ‖z_k‖)²,   (62)

so that

W = diag( (2δ − ‖z_1‖)/(δ − ‖z_1‖)², ..., (2δ − ‖z_m‖)/(δ − ‖z_m‖)² ).   (63)

Furthermore, one can show

ω'_k = (3δ − ‖z_k‖)/(δ − ‖z_k‖)³.   (64)

Therefore, the diagonal matrix Ω can be obtained as

Ω = diag( (3δ − ‖z_k‖)/(‖z_k‖(δ − ‖z_k‖)³) ), k = 1, ..., m.   (65)

The Hessian matrix for the overall potential V = (1/2) Σ_{i=1}^n Σ_{j∈N_i} V_ij is identified as

H_V = (H^T ⊗ I_d) Z Ω Z^T (H ⊗ I_d) + H^T W H ⊗ I_d,   (66)

with W and Ω defined in (63) and (65), respectively.

5.3. A further example of Hessian identification for edge-based coordination potentials. In this subsection, as a further example, we will show an alternative approach for identifying the Hessian formula for an edge-based coordination potential.
Consider the overall potential function (67), where z_k = p_i − p_j is the relative position vector for edge k. The potential function (67) has been discussed in e.g. [27] for multi-agent formation and coordination control with collision avoidance, and the gradient function for agent i is obtained accordingly. We may use ρ_kij and ρ_k, or z_k and z_kij, interchangeably in the following text. Define a vector ρ = [ρ_1, ρ_2, ..., ρ_m]^T and a block diagonal matrix Z in the form Z = blk-diag(z_1, z_2, ..., z_m). Then one can obtain a compact form of the gradient, in which diag(ρ_k) denotes a diagonal matrix with its k-th diagonal entry being ρ_k; the second line of that equality is particularly useful for later calculations. We now follow the matrix calculus rules to obtain a compact form of the Hessian. From the basic formula (6), one obtains the second-order differential in (70). We then calculate the term dρ. To this end, we define α_k = ‖z_k‖² and α = [α_1, α_2, ..., α_m]^T, so that dα_k = 2z_k^T dz_k. From these relations one can express dρ in terms of dp, and further rewrite (70) as a quadratic form in dp, from which a compact formula of the Hessian matrix is derived. In short, we summarize the above result in the following lemma.

Lemma 5.2. For the potential function (67) in multi-agent coordination, the Hessian formula is identified in the compact form derived above, which can be equivalently rewritten by expanding the block-diagonal terms. A brief calculation of the above Hessian matrix is shown in the appendix of [28]. The formula can also be obtained from the general formula in Theorem 5.1, while the above calculation provides an alternative way to obtain a compact form of the Hessian matrix.
6. Hessian matrix identification for composite potential functions. In this section we consider more complex potential functions that involve both distance-based functions and area-based functions (which are thus termed composite potentials).
These composite functions are examples of clique-based potentials (with edge-based functions being the simplest case, and triangular-area functions the second simplest case). Since a general and compact form of the Hessian formula for clique-based potentials is generally intractable, we will discuss in this section two examples of Hessian derivation for composite potentials with both distance and area functions, where the cliques specialize to the 2-agent edge subgraph and the 3-agent triangle subgraph. These examples of potential functions are taken from [4]. Nevertheless, the derivation of the Hessian formula discussed in this section will be helpful in identifying Hessians for more general clique-based potentials.
6.1. Identification example I: 3-agent coordination system with both distance and area functions. Consider a 3-agent coordination system with the following potential that includes two terms incorporating both distance and area constraints:

V = (1/4) Σ_{k=1}^3 (‖z_k‖² − d_k²)² + (1/2)K(S − S*)²,   (78)

where d_ij is the desired distance between agents i and j, and S denotes the signed area spanned by the three agents, defined as in Section 4.2. By denoting V_1 = (1/4)e^T e, with e = [e_1, e_2, e_3]^T and e_k = ‖z_k‖² − d_k² for k = 1, 2, 3 corresponding to the three edges, and V_2 = (1/2)K(S − S*)², we rewrite V = V_1 + V_2. Therefore, the Hessian has two parts: H_V = H_{V_1} + H_{V_2}. According to Example 1 in Section 5.2, the first part of the Hessian matrix H_{V_1} is readily identified as

H_{V_1} = 2R^T R + H^T W H ⊗ I_2,   (79)

where R = Z^T(H ⊗ I_2) is the 3×6 rigidity matrix associated with the underlying undirected graph, and W = diag{e_1, e_2, e_3}.

Therefore, one can rewrite (82) as

d²V_2 = (dp)^T ( K y y^T + K(S − S*) H_S ) dp,   (86)

where y := ∇_p S ∈ R^6 is the gradient of the signed area S with respect to the stacked position vector p, and H_S := ∂²S/(∂p ∂p^T) is the (constant) Hessian of S; the matrix in the middle,

H_{V_2} = K y y^T + K(S − S*) H_S,   (87)

is the Hessian matrix H_{V_2}. We summarize the above result and calculations in the following:

Theorem 6.1. The Hessian matrix associated with the composite potential V = V_1 + V_2 is identified as H_V = H_{V_1} + H_{V_2}, with H_{V_1} and H_{V_2} given in (79) and (87), respectively.
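The area-part Hessian can be checked numerically. The sketch below uses our own notation (y for the gradient of S, H_S for its constant Hessian) and a particular sign convention for the signed area, S = (1/2)(p_2 − p_1)^T J(p_3 − p_1), as an assumption for illustration; the identity H_{V_2} = K y y^T + K(S − S*) H_S follows from d²V_2 = K(dS)² + K(S − S*)d²S:

```python
# Our own numerical sketch of the area-part Hessian with all three agent
# positions free; S is quadratic in p, so H_S is constant.
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])
K, S_star = 2.0, 0.4

def S(p):
    p1, p2, p3 = p[:2], p[2:4], p[4:6]
    return 0.5 * (p2 - p1) @ J @ (p3 - p1)

V2 = lambda p: 0.5 * K * (S(p) - S_star) ** 2

def H_formula(p):
    p1, p2, p3 = p[:2], p[2:4], p[4:6]
    # gradient of S, stacked agent by agent
    y = 0.5 * np.concatenate([J @ (p2 - p3), J @ (p3 - p1), J @ (p1 - p2)])
    # constant Hessian of S, assembled from (1/2)J blocks
    Zb, Jh = np.zeros((2, 2)), 0.5 * J
    H_S = np.block([[Zb, Jh, -Jh], [Jh.T, Zb, Jh], [-Jh.T, Jh.T, Zb]])
    return K * np.outer(y, y) + K * (S(p) - S_star) * H_S

def num_hessian(f, x, h=1e-5):
    E = np.eye(len(x))
    return np.array([[(f(x + h*E[i] + h*E[j]) - f(x + h*E[i])
                       - f(x + h*E[j]) + f(x)) / h**2
                      for j in range(len(x))] for i in range(len(x))])

p = np.array([0.0, 0.0, 1.3, 0.1, 0.4, 1.2])
assert np.allclose(num_hessian(V2, p), H_formula(p), atol=1e-3)
```

Unlike the motivating example of Section 4.2 (where two agents were pinned and S was affine in the free variable), here H_S is nonzero, which is why the second term K(S − S*)H_S appears.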
6.2. Identification example II: 4-agent coordination system with both distance and area functions. Consider a 4-agent coordination system in the 2-D space whose performance index V combines five edge-based distance error terms with two signed-area terms S_A and S_B, associated with the triangles (1, 2, 3) and (2, 3, 4), respectively. Write the performance index V as a sum V_1 + V_2, where V_1 contains the distance error terms e_ij and V_2 contains the area error terms S_A, S_B. Again, according to Example 1 in Section 5.2, the first part of the Hessian matrix H_{V_1} can be computed in a similar way, and is given by

H_{V_1} = 2R^T R + H^T W H ⊗ I_2,   (89)

where R ∈ R^{5×8} is the rigidity matrix associated with the underlying graph (the edge orientations have an immaterial effect on the Hessian) and W is the diagonal matrix of the distance error terms e_ij. We now identify the Hessian for the second part of the potential function V_2. Note that dV_2 involves the differentials dS_A and dS_B, in which one can show that dS_A has an analogous expression over the triangle (1, 2, 3) (with a zero block corresponding to p_4), and

dS_B = −(1/2) [ 0, (p_3 − p_4)^T J, (−p_2 + p_4)^T J, (p_2 − p_3)^T J ] [dp_1^T, dp_2^T, dp_3^T, dp_4^T]^T.
The above calculation immediately shows the formula of the Hessian matrix. We now summarize:

Theorem 6.2. The Hessian matrix associated with the potential function V = V_1 + V_2 is identified as H_V = H_{V_1} + H_{V_2}, with H_{V_1} given in (89) and H_{V_2} expressed through the area-gradient vectors Y_A and Y_B defined in (93) and (96), respectively.
The Hessian formula was discussed and used in [4], but details were not shown there. A conventional entry-wise calculation would soon make the identification process intractable. We remark that, by following the two examples illustrated in this section, one can readily identify Hessians for more general composite potentials modelled on a general undirected graph.

7. Discussions and conclusions. In this paper we present fast and convenient approaches for identifying the Hessian matrix for several typical potentials in distributed multi-agent coordination control. We have advanced the 'indirect' approach to Hessian identification based on matrix differentials and calculus rules, as opposed to the direct approach with entry-wise calculation. Many distributed coordination laws involve an overall potential that is a summation of local distance-based potentials over all edges. For such edge-tension distance-based potentials, we derive a general formula for the Hessian matrix, with which Hessian formulas for several commonly-used coordination potentials can be readily derived as special cases. We also analyze the case of composite potentials with both distance and triangular-area functions, associated with groups of three agents (as opposed to edge-tension potentials involving two agents); two examples of Hessian identification for such potentials are discussed in detail. The 'indirect' matrix calculus approach shows its benefit as a fast and tractable identification process. The results in this paper can serve as guidance for Hessian identification for other types of potentials in multi-agent coordination control.