OPTIMUM SENSOR PLACEMENT FOR LOCALIZATION OF A HAZARDOUS SOURCE UNDER LOG NORMAL SHADOWING

Abstract. We consider the problem of optimum sensor placement for localizing a hazardous source located inside an N-dimensional hypersphere centered at the origin with a known radius r_1. All one knows about the probability density function (pdf) of the source location is that it is spherically symmetric, i.e. it is a function only of the distance from the center. The sensors must be placed at a safe distance of at least r_2 > r_1 from the center, to avoid damage. Localization must be effected from the strength of a signal emanating from the source, as received by a set of sensors that do not lie on an (N−1)-dimensional hyperplane. Under the assumption that this signal strength experiences log-normal shadowing, we characterize non-coplanar sensor positions that optimize three distinguished parameters associated with the underlying Fisher Information Matrix (FIM): maximizing its smallest eigenvalue, maximizing its determinant, and minimizing the trace of its inverse. We show that all three have the same set of optimizing solutions, which involve placing the sensors on the surface of the hypersphere of radius r_2. As spherical symmetry of the pdf precludes uniqueness, we provide certain canonical optimizing solutions in which the i-th sensor position is x_i = Q^{i−1} x_1, with Q an orthogonal matrix. We provide necessary and sufficient conditions on Q and x_1 for the x_i to be non-coplanar and optimizing. In addition, we provide a geometrical interpretation of these solutions. We observe that the N-dimensional solutions for N > 3 have implications for the optimal design of sensing matrices in certain compressed sensing problems.


1. Introduction. In the recent past, research in location-based services has gained momentum, with virtually every device now being a smart, location-aware device. Though GPS offers good location estimates when there is Line of Sight (LOS) communication with the GPS satellites, in indoor and dense urban environments with obstacles and multipath effects LOS is often weak or missing. In such cases wireless localization using sensor networks can perform cost-effective and accurate source localization. Specifically, in applications where the source poses a threat to life, wireless sensor technology becomes almost indispensable. The references [17], [20], [39], [33], [8] and [26] provide several applications of wireless source localization, including location-based advertising, disaster management, indoor navigation and positioning, packet routing in mobile networks, chemical and biological source tracking, ubiquitous computing, wearable body area networks, habitat monitoring and animal tracking, to list a few. Besides these commercial and security-based applications, future wireless sensor technology promises easier and safer navigation to people with disabilities such as vision impairments [31].
In the context of source localization using wireless sensor networks, groups of sensors receive signals from the source and wirelessly communicate their relative position information, in either a centralized or a distributed fashion, to spot the source [11], [27], [29]–[42]. Three fundamental questions arise in wireless source localization using sensor technology: (a) In what form do we acquire the location information of the source? (b) Where do we place the sensors to make the best guess of the source location? (c) How do we measure the accuracy of the location estimate?
Sensor geometry plays a significant role in determining the accuracy of the source location estimate [38], [5], bringing us to the subject of this paper. We assume that a hazardous source is located in the ball of radius r_1, centered at the origin,

B(r_1) = {a ∈ R^N : ‖a‖ ≤ r_1}. (1)

All norms are 2-norms. We further assume that the source position y ∈ R^N has a radially symmetric probability density function (pdf). In particular, the pdf of y is given by [7]

f_Y(y) = g(‖y‖) for y ∈ B(r_1), and f_Y(y) = 0 otherwise. (2)

We further assume that there are n > N sensors located at positions x_i ∈ R^N, which must localize this source by measuring the received signal strength (RSS) of a signal emanating from it. We assume that the RSS experiences log-normal shadowing. The model defined in Section 2 has two parameters, A and β, which we assume are known to the sensors. The hazardous nature of the source mandates that the sensors keep away from it, i.e. obey, for some r_2 > r_1,

‖x_i‖ ≥ r_2, i ∈ {1, · · · , n}. (3)

Our goal is then to optimally place the sensors so that they do not lie on an (N−1)-dimensional hyperplane, obey (3), and optimize the Fisher Information Matrix (FIM) of this localization problem in one of the three ways below: 1. A-optimality, which requires minimization of the trace of the inverse of the FIM; 2. D-optimality, which requires maximization of the determinant of the FIM; or 3. E-optimality, which requires maximization of the minimum eigenvalue of the FIM [36], [23], [28].
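For concreteness, a radially symmetric pdf of the form (2) can be sampled by drawing a direction uniformly on the unit sphere and an independent radius supported on [0, r_1]. The sketch below uses the uniform-ball radial density as an illustrative choice (the paper does not fix g), and then checks the symmetry numerically.

```python
import numpy as np

rng = np.random.default_rng(0)
r1, N, n_samp = 0.7, 3, 200_000

# Direction: normalize a standard Gaussian vector (uniform on the unit sphere).
u = rng.standard_normal((n_samp, N))
u /= np.linalg.norm(u, axis=1, keepdims=True)

# Radius: any density supported on [0, r1] works; this particular choice makes
# y uniform in the ball B(r1) -- an illustrative g, not one mandated by the paper.
R = r1 * rng.random(n_samp) ** (1.0 / N)

y = R[:, None] * u          # samples of the source location

# Radial symmetry: the mean is (approximately) the origin and the
# covariance is (approximately) a scaled identity.
mean = y.mean(axis=0)
cov = np.cov(y.T)
```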
Non-coplanarity is motivated by a practical and a technical reason: practical, in that it is necessary for unique localization; technical, because otherwise there may be instances in which the FIM is singular. We show that the solution to all three problems is identical, is independent of the parameters A and β and of f_Y(y), and mandates that the sensors be placed on the surface of the hypersphere of radius r_2, in a manner that makes the FIM a scaled identity.
As radial symmetry of the pdf f_Y(y) precludes uniqueness of the optimum solution, we provide certain interesting canonical solutions. Specifically, these are of the form x_i = Q^{i−1} x_1, where Q is an orthogonal matrix. We provide necessary and sufficient conditions on the [Q, x_1] pairs for them to generate optimum sensor positions, and provide geometric interpretations in two and three dimensions. For example, one configuration in R^3 involves placing sensors on the circumferences of the bases of two cones whose apexes are at the origin and whose bases are parallel to the x-y plane, with circumferences on the sphere of radius r_2. If x_i is on the top base then x_{i+1} is on the bottom base, with an angular separation of 2π/n between them, n being the number of sensors.
1.1. Related Work. There are two categories of work in this area. The first set of papers takes a very different perspective from ours; the second adopts a comparable approach. In the first set is [6], which proposes an optimum camera placement strategy that maximizes the observability of motion in a specified region under surveillance. On the other hand, [35] proposes placement for triangulation-based localization. The authors use integer linear programming to estimate the source location using bearing-only measurements, and focus on finding a minimum number of sensors, and an optimal placement for them, given a threshold on the tolerable uncertainty. Closer to our approach, but still different, are [19], [1] and [25], which use the FIM to find optimum sensor placements. Each of these papers focuses on a particular constraint. While [1] focuses on optimally placing the sensors in an array, [25] assigns weights to source positions in a grid based on the importance of the position, and [19] discretizes the search space.
The second category of papers includes [5], [24] and [18]. All three are restricted to two dimensions. Further, they assume that measurements of distance or Time Difference of Arrival (TDOA) are Gaussian, an approximation that only holds at high SNRs. Further, the implicit assumptions on the pdf of y are only special cases of (2), e.g. a truncated Gaussian. Similarly, our conference papers [13], [15] are in two dimensions, and assume the pdf f_Y(y) = δ(y) and a uniform pdf, respectively. To the best of our knowledge, the only papers dealing with three dimensions are [14] and [2], but these are again restricted to the pdfs f_Y(y) = δ(y) and uniform, respectively. It turns out that they exploit a special property of the FIM that holds for these two pdfs but not for the general radially symmetric pdf considered here.
What is more, [2] lacks proofs. Moreover, all of these references, [13], [15], [14] and [2], only give sufficient conditions for the [Q, x_1] pairs defining the canonical solutions to be optimizing while not residing on an (N−1)-dimensional hyperplane. Finally, we note that though the N > 3 cases are not directly relevant to localization, as argued in [3] the f_Y(y) = δ(y) case impacts the design of optimal sensing matrices for compressed sensing. Section 2 has the problem formulation. Section 3 derives some properties of the FIM. Section 4 characterizes the solution to the three problems without taking the non-coplanarity requirement into account. Section 5 presents the canonical solutions, which respect the non-coplanarity requirement. Simulations are in Section 6.
2. Problem formulation. In this section, we describe the received signal strength (RSS) model that underlies this paper, derive the FIM and give a precise problem formulation. We assume that the RSS experiences log-normal shadowing [32]. In particular, with mutually uncorrelated w_i ∼ N(0, σ^2), β a path loss parameter and A the RSS at a unit distance, there holds

s_i = A − 10β log_10(‖x_i − y‖) + w_i, i ∈ {1, · · · , n}. (4)

In the noise-free case, this is tantamount to an RSS that decays as the β-th power of the distance from the source. In broad terms, our goal is to select the x_i so that the s_i in (4) provide an optimum estimate of y, subject to (1)-(3). As the source itself is random, a common optimality criterion is maximization, in an appropriate sense, of the expectation of the Fisher Information Matrix (FIM), which in the random parameter estimation case serves as the appropriate FIM, and will be referred to as such in the sequel. As is well known [36], for random parameter estimation the inverse of the expectation of the FIM provides the Cramer-Rao Lower Bound (CRLB) matrix. Additionally, to assure unique localization, we also demand that the x_i not lie on an (N−1)-dimensional hyperplane. For example, collinear sensors for N = 2, and coplanar sensors for N = 3, can only localize to within a flip ambiguity. This requirement also implies that n > N.
As shown in the Appendix, for a nonrandom y the FIM for this problem is given by (6). Consequently, we must "maximize" its expectation

F = Σ_{i=1}^n E_y[ (x_i − y)(x_i − y)^⊤ / ‖x_i − y‖^4 ]. (7)

The selection of the sensor locations x_i maximizing (7) is thus independent of the parameters β and σ, though the derivation of the FIM assumes that they are known. What exactly does the maximization of F mean? As noted earlier, the standard criteria are what we call E-, D- and A-optimality. These refer, respectively, to the maximization of the minimum eigenvalue of F, i.e. λ_min(F), the maximization of det(F), and the minimization of trace(F^{−1}). The last in particular is equivalent to minimizing the total mean square localization error [36]. We show later that the solutions to all three problems are identical. Formally, we wish to solve the following three problems:

Problem 1: For given integers n ≥ N + 1, N > 1, and y ∈ R^N with pdf as in (2), find distinct x_i ∈ R^N for i ∈ {1, 2, · · · , n}, that do not lie on an (N−1)-dimensional hyperplane, such that λ_min(F) is maximized subject to (3).
Problem 2: For given integers n ≥ N + 1, N > 1, and y ∈ R^N with pdf as in (2), find distinct x_i ∈ R^N for i ∈ {1, 2, · · · , n}, that do not lie on an (N−1)-dimensional hyperplane, such that det(F) is maximized subject to (3).
Problem 3: For given integers n ≥ N + 1, N > 1, and y ∈ R^N with pdf as in (2), find distinct x_i ∈ R^N for i ∈ {1, 2, · · · , n}, that do not lie on an (N−1)-dimensional hyperplane, such that trace(F^{−1}) is minimized subject to (3).
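The three criteria can be compared on a small numerical example. The matrix below is arbitrary positive definite test data, not a FIM from the paper; among matrices of equal trace, the scaled identity is simultaneously best under all three criteria, foreshadowing Theorem 4.2.

```python
import numpy as np

def e_opt(F): return np.linalg.eigvalsh(F).min()     # E-optimality: maximize
def d_opt(F): return np.linalg.det(F)                # D-optimality: maximize
def a_opt(F): return np.trace(np.linalg.inv(F))      # A-optimality: minimize

rng = np.random.default_rng(1)
B = rng.standard_normal((3, 3))
F = B @ B.T + 0.5 * np.eye(3)       # a generic positive definite "FIM"

# Scaled identity with the same trace: the shape optimal placements achieve.
F_iso = (np.trace(F) / 3) * np.eye(3)
```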
We observe an added virtue of the non-coplanarity of the x_i: it ensures that (6) is nonsingular for all y.
3. Properties of F. In this section, we examine some properties of (7). We first define a single summand in (7), namely

H(x) = E_y[ (x − y)(x − y)^⊤ / ‖x − y‖^4 ]. (8)

Of course, with this, F in (7) becomes

F = Σ_{i=1}^n H(x_i). (9)

The first lemma discusses how H(x) changes under an orthogonal transformation. Its importance comes from the fact that two x_i that have the same norm are related through an orthogonal transformation.

Lemma 3.1. Consider H(x) in (8), with y distributed with pdf as in (2). Then with P ∈ R^{N×N} an orthogonal matrix, there holds H(Px) = P H(x) P^⊤.

Proof. Consider the change of variables y = Pz. As det(P) = ±1 and ‖z‖ = ‖y‖, because of (2) the pdf is unchanged. Thus

H(Px) = E_z[ (Px − Pz)(Px − Pz)^⊤ / ‖P(x − z)‖^4 ] = P E_z[ (x − z)(x − z)^⊤ / ‖x − z‖^4 ] P^⊤ = P H(x) P^⊤.

We will next characterize the eigenvalues of H(x). To this end we present the following lemma, where e_i ∈ R^N denotes the vector whose i-th element is one and the rest zero.

Lemma 3.2. Consider y ∈ R^N with pdf as in (2) and R > r_1. Then for every k ∈ {1, · · · , N}, H(Re_k) is a positive semidefinite diagonal matrix. Further, its k-th diagonal element is distinct from the others, while the rest equal each other.
Proof. Because of the radial symmetry of (2), it suffices to prove the result for k = 1.
Denote by y_i the i-th element of y. For i ∈ {2, · · · , N}, if we fix all elements of y except y_i, then the integrand defining the (1, i)-th element of H(Re_1) is an odd function of y_i, while g(‖y‖) is an even function; thus the corresponding integral vanishes. The same argument applies for the same i and any l ≠ i. Consequently, the (1, i)-th element of H(Re_1) is zero for every i ≠ 1, and likewise, for i ∈ {2, · · · , N} and m ∈ {2, · · · , N} \ {i}, the (i, m)-th element is zero. Thus H(Re_1) is diagonal. The rest of the lemma follows from the fact that for all i, m ∈ {2, · · · , N} the i-th and m-th diagonal elements are equal, that (18) holds for all i ∈ {2, · · · , N}, and that equality in (18) holds iff (11) holds.
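Lemma 3.2 can be checked by Monte Carlo. The sketch below assumes the single-summand form H(x) = E_y[(x − y)(x − y)^⊤/‖x − y‖^4] consistent with the FIM of Section 2, draws y uniformly from the ball (one admissible choice of g), and verifies that H(Re_1) is diagonal with its trailing diagonal entries equal and its first entry distinct.

```python
import numpy as np

rng = np.random.default_rng(2)
N, r1, R, n_samp = 3, 1.0, 2.0, 400_000

# y drawn from a radially symmetric pdf on B(r1): uniform in the ball.
u = rng.standard_normal((n_samp, N))
u /= np.linalg.norm(u, axis=1, keepdims=True)
y = (r1 * rng.random(n_samp) ** (1 / N))[:, None] * u

def H(x):
    # Monte Carlo estimate of E[(x - y)(x - y)^T / ||x - y||^4]
    d = x[None, :] - y
    w = np.linalg.norm(d, axis=1) ** 4
    return (d[:, :, None] * d[:, None, :] / w[:, None, None]).mean(axis=0)

H1 = H(R * np.eye(N)[0])   # H(R e_1)
```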
As f_Y(y) is radially symmetric, (11) implies that the source is almost surely at the origin. This is essentially the setting considered in [13] and [15]. Thus, unless there is no uncertainty in the location of the source, H(x) is positive definite. As all vectors of the same norm are related by an orthogonal transformation, using Lemmas 3.1 and 3.2 we obtain the following theorem.

Theorem 3.3. Consider y ∈ R^N with pdf as in (2). Then for all ‖x‖ > r_1, H(x) in (8) is positive semidefinite, with N − 1 eigenvalues that are equal to each other and another that is distinct from them and is positive. Further, H(x) is positive definite unless (11) holds.
In the sequel, trace(H(x)) will play a critical role. Evidently, because of Lemma 3.1, it depends only on ‖x‖. Thus we introduce the notation

T(R) = trace(H(x)), ‖x‖ = R. (20)

This brings us to a key difference between the development here and that in [14] and [2]. Those papers relied on the fact that in their settings (21) holds for all r_1 < R_1 < R_2. For general radially symmetric distributions this is false. Suppose in particular that N = 3 and y is uniformly distributed on the surface of the sphere of radius r_1, i.e. in (2) g concentrates all mass at ‖y‖ = r_1. Then some tedious calculations show that d[H(Re_3)]_{33}/dR is in fact positive when R > r_1 is sufficiently close to r_1. As [H(Re_3)]_{33} is an eigenvalue of H(Re_3), this means that (21) will not hold. To accommodate this difference, in a significant technical departure from [14] and [2], we will show that what is instead sufficient for the development here is that T(R) in (20) be a decreasing function of R for all R > r_1. This is proved below.
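As a quick Monte Carlo sanity check before the formal proof: with the same rank-one summand form of H as above (an assumption consistent with Section 2), trace(H(Re_1)) reduces to E[1/‖Re_1 − y‖^2], and for the uniform-on-the-sphere-surface pdf just discussed this is decreasing in R even though the individual eigenvalue [H(Re_3)]_{33} need not be.

```python
import numpy as np

rng = np.random.default_rng(3)
r1, n_samp = 1.0, 500_000

# y uniform on the SURFACE of the sphere of radius r1 -- the pdf for which
# (21) fails, per the discussion above.
y = rng.standard_normal((n_samp, 3))
y = r1 * y / np.linalg.norm(y, axis=1, keepdims=True)

def T(R):
    # trace H(R e_1) = E[ 1 / ||R e_1 - y||^2 ], by the rank-one structure
    # of each summand.
    d = np.array([R, 0.0, 0.0])[None, :] - y
    return (1.0 / (np.linalg.norm(d, axis=1) ** 2)).mean()

Ts = [T(R) for R in (1.2, 1.5, 2.0, 3.0)]
```

For this surface pdf the sphere average has the closed form T(R) = ln((R + r1)/(R − r1))/(2 r1 R), which is strictly decreasing in R; the Monte Carlo values match it.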
Theorem 3.4. With T(·) defined in (20) and y distributed as in (2), T(R) is strictly decreasing for all R > r_1.

Proof. As all vectors of the same norm are related by orthogonal transformations, for all R > r_1,

T(R) = trace(H(Re_1)) = E_y[ 1/‖Re_1 − y‖^2 ].

The result follows from the fact that ‖R_2 e_1 − y‖ > ‖R_1 e_1 − y‖ whenever ‖y‖ ≤ r_1 < R_1 < R_2.

4. Characterizing Optimality. In this section we characterize the x_i that achieve optimality for all three of Problems 1 to 3, without enforcing the non-coplanarity condition. This requires the following lemma.
Lemma 4.1. Suppose A = A^⊤ ∈ R^{N×N} is positive definite with eigenvalues λ_i(A). Then

λ_min(A) ≤ trace(A)/N, (23)
det(A) ≤ (trace(A)/N)^N, (24)
trace(A^{−1}) ≥ N^2/trace(A). (25)

Further, equality holds in each of (23)-(25) iff

A = (trace(A)/N) I. (26)

Proof. First observe that for all i ∈ {1, · · · , N}, every eigenvalue λ_i(A) is positive. Then (23) follows from the fact that the trace is the sum of the eigenvalues. Further, equality requires that λ_i(A) = trace(A)/N for all i ∈ {1, · · · , N}. As A is symmetric, this happens iff (26) holds. The AM-GM inequality states that the arithmetic mean of N positive numbers is greater than or equal to their geometric mean, with equality iff all N numbers are equal. Thus (24) follows from the fact that

det(A) = Π_{i=1}^N λ_i(A) ≤ ( (1/N) Σ_{i=1}^N λ_i(A) )^N = (trace(A)/N)^N.

That equality here holds iff (26) holds follows as a consequence of the fact that this equality is equivalent to the equality of the eigenvalues of the matrix.
Finally, also from the AM-GM inequality,

trace(A^{−1}) = Σ_{i=1}^N 1/λ_i(A) ≥ N ( Π_{i=1}^N 1/λ_i(A) )^{1/N} ≥ N^2 / Σ_{i=1}^N λ_i(A) = N^2/trace(A). (28)

The result follows from the fact that equality in (28) holds iff all the λ_i(A) are equal, and that under (26), trace(A^{−1}) = N^2/trace(A). Thus we have the promised characterization of optimality.
Theorem 4.2. The vectors x_i ∈ R^N maximize λ_min(F) iff ‖x_i‖ = r_2 for all i ∈ {1, · · · , n} and

F = (n T(r_2)/N) I. (29)

The same is true for the x_i ∈ R^N that maximize det(F) and those that minimize trace(F^{−1}).
Proof. The fact that such x_i exist is proved in Section 5. The fact that they optimize follows from Lemma 4.1, which requires that the trace of F be as large as possible, and the fact that because of Theorem 3.4 and (3),

trace(F) = Σ_{i=1}^n T(‖x_i‖) ≤ n T(r_2),

with equality iff each x_i has norm r_2.
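The inequalities driving Theorem 4.2 — the trace, determinant, and trace-of-inverse bounds of Lemma 4.1, tight exactly at a scaled identity — are easy to confirm numerically. The matrix below is arbitrary positive definite test data.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 4
B = rng.standard_normal((N, N))
A = B @ B.T + 0.1 * np.eye(N)   # generic symmetric positive definite test matrix
t = np.trace(A)

lam_min = np.linalg.eigvalsh(A).min()
det_A = np.linalg.det(A)
tr_inv = np.trace(np.linalg.inv(A))

# Equality case: the scaled identity with the same trace.
A_iso = (t / N) * np.eye(N)
```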
Thus the optimum occurs when each sensor resides on the surface of the sphere of radius r_2. Observe that the optimizing solution cannot be unique. For example, because of Lemma 3.1, H(−x_i) = H(x_i), so flipping the sign of any sensor location does not alter F. Similarly, suppose a set of x_i optimizes; then for any orthogonal P, replacing each x_i by P x_i will also result in an F that equals (29). Further, suppose n = N + 1 sensors achieve optimality. Then optimality with 2N + 2 sensors can be achieved with an infinite number of combinations: for example, one can place the first N + 1 sensors at the x_i and, for an arbitrary orthogonal P, the remaining at P x_i.
Because of this, rather than proposing an exhaustive list of optimal configurations, in Section 5 we propose a set of canonical solutions.

5. Canonical Solutions. We have proved in Section 4 that the solution to all three problems involves ‖x_i‖ = r_2 and the satisfaction of (29). We have also argued that the solutions are nonunique in potentially nontrivial ways. In this section we provide a class of canonical solutions, and in the process prove the existence of x_i that do not lie on an (N−1)-dimensional hyperplane and satisfy (29). We also expose certain salient differences between the N = 2 and N > 2 cases, as well as between the cases of even and odd N.
Since a necessary condition for optimality is that all x_i have norm r_2, and all such vectors are mutually related by orthogonal transformations, the canonical solutions we propose take the following form: for an orthogonal matrix Q ∈ R^{N×N}, the x_i ∈ R^N obey

x_i = Q^{i−1} x_1, i ∈ {1, · · · , n}. (30)

In view of Theorem 4.2, our goal is to characterize all orthogonal Q and x_1 pairs such that under (30) the x_i do not lie on an (N−1)-dimensional hyperplane and result in (29). The first requirement necessitates that n > N.
Suppose under (30), (29) holds, i.e.

Σ_{i=1}^n H(Q^{i−1} x_1) = (n T(r_2)/N) I. (31)

We make the following definition.

Definition 5.1. Call a pair [Q, x_1], with Q ∈ R^{N×N} orthogonal and ‖x_1‖ = r_2, admissible if under (30) the x_i do not lie on an (N−1)-dimensional hyperplane and (31) holds.

We now explore how Q changes with x_1. Consider a z_1 ∈ R^N with ‖z_1‖ = r_2. Then there exists an orthogonal matrix P such that

z_1 = P x_1. (32)

Then under (31), because of Lemma 3.1,

Σ_{i=1}^n H((PQP^⊤)^{i−1} z_1) = P [ Σ_{i=1}^n H(Q^{i−1} x_1) ] P^⊤ = (n T(r_2)/N) I.

Thus the pair [Q, x_1] optimizes iff so does [PQP^⊤, P x_1]. This brings us to a distinction between N = 2 and N = 3, or for that matter any N > 2. For N = 2, all orthogonal matrices belong to one of two categories. The first, known as Givens Rotations, have the form

Q_1(θ) = [cos θ, −sin θ; sin θ, cos θ] (33)

for some θ ∈ R, and the second is

Q_2(θ) = [cos θ, sin θ; sin θ, −cos θ]. (34)
Among these, (33) is a rotation matrix in that its determinant is one; in fact it rotates every vector counterclockwise by the angle θ. On the other hand, (34), having determinant −1, is not a rotation matrix. More compellingly, Q_2^2(θ) = I.
Thus with Q = Q_2(θ), the set in (30) comprises precisely two elements, rendering the x_i collinear. Thus henceforth for N = 2 only Givens Rotations will be considered, as these are necessary for the sensors to be non-collinear under (30). This fact is not established in [13] and [15]. Observe

Q_1(θ_1) Q_1(θ_2) = Q_1(θ_2) Q_1(θ_1) = Q_1(θ_1 + θ_2), (35)

i.e. Givens Rotations commute.
Further, given any pair z_1, x_1 of the same norm, there exists θ_1 ∈ R such that z_1 = Q_1(θ_1) x_1. Should Q be a Givens Rotation, then select P = Q_1(θ_1) in (32) and observe from (35) that PQP^⊤ = Q. In other words, for N = 2, if a Givens Rotation Q leads to (31) with a particular x_1, then the same Q also works with an arbitrary z_1 of the same norm as x_1. The commutativity in (35) does not extend to N > 2. Thus for N > 2, changing x_1 will require changing Q.
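The claimed properties of (33) and (34) — the composition and commutativity of Givens Rotations, the involution Q_2^2(θ) = I, and the determinants — can be confirmed directly:

```python
import numpy as np

def Q1(t):  # Givens rotation (33), det = +1
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def Q2(t):  # reflection (34), det = -1, squares to the identity
    return np.array([[np.cos(t), np.sin(t)], [np.sin(t), -np.cos(t)]])

a, b = 0.7, 1.9   # arbitrary angles
```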

5.1. Avoiding Coplanarity.
A second point of departure between N = 2 and N > 2 comes from the fact that any three distinct points on a circle centered at the origin are necessarily non-collinear. For N > 2, N + 1 distinct points on a hypersphere centered at the origin can lie on an (N−1)-dimensional hyperplane. The requirement of avoiding an (N−1)-dimensional hyperplane imposes certain conditions on Q, which we characterize in the theorem below. As background to this theorem, we observe that a real orthogonal matrix Q is normal, as Q^⊤Q = QQ^⊤ = I [21]. Thus it is unitarily diagonalizable, i.e. Q = U Ω U^H for some unitary U ∈ C^{N×N} with U^H U = I, where Ω ∈ C^{N×N} is a diagonal matrix comprising the eigenvalues of Q, with Ω = diag{ω_1, · · · , ω_N} = diag{e^{jθ_1}, · · · , e^{jθ_N}}, θ_i ∈ R.
Since the eigenvalues of Q are on the unit circle, and complex eigenvalues appear in conjugate pairs, for even N this means that the eigenvalues of Q must be distinct, complex, and of the form e^{±jθ_i}. For N = 2 this necessitates the use of Givens Rotations, as the eigenvalues of (34) are ±1 regardless of θ. On the other hand, for odd N, N − 1 of the eigenvalues are complex and of the form e^{±jθ_i}; the remaining eigenvalue must be at −1. Thus for odd N, det(Q) = −1, preventing Q from being a rotation matrix. This also brings into sharp relief a contrast between N = 2 and N = 3: while for the former admissible Q matrices are rotation matrices, for N = 3 they cannot be.
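The eigenstructure just described can be illustrated by assembling Q from 2×2 rotation blocks, appending a −1 block when N is odd. This is an illustrative construction, not the paper's general characterization.

```python
import numpy as np

def rot(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def block_diag(*blocks):
    n = sum(b.shape[0] for b in blocks)
    out = np.zeros((n, n))
    i = 0
    for b in blocks:
        k = b.shape[0]
        out[i:i + k, i:i + k] = b
        i += k
    return out

# Even N = 4: eigenvalues come in distinct conjugate pairs e^{+-j theta_i}.
Q_even = block_diag(rot(0.5), rot(1.1))
# Odd N = 5: the leftover eigenvalue is placed at -1, so det(Q) = -1.
Q_odd = block_diag(rot(0.5), rot(1.1), np.array([[-1.0]]))

ev_even = np.linalg.eigvals(Q_even)
ev_odd = np.linalg.eigvals(Q_odd)
```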
Only the necessity of (i) and (ii) was noted in [2], making the argument there incomplete. The role of (iii) is quite crucial. Thus, consider the example below.

5.2. Achieving (31). We now turn to the remaining question of designing [Q, x_1] pairs that ensure (31). Suppose W ∈ R^{N×N} is an orthogonal matrix for which

W e_1 = x_1/‖x_1‖. (43)

Then because of Lemma 3.1 and Lemma 3.2 one obtains

H(x_1) = W Λ W^⊤, (44)

where, for λ_1 > 0, λ_2 ≥ 0 and λ_1 ≠ λ_2,

Λ = diag{λ_1, λ_2, · · · , λ_2}.

Define T = U^H W, with U as in (42). Then (31) is equivalent to (47). Observe that (iii) of Theorem 5.2 is equivalent to the requirement that all elements of T e_1 be nonzero, i.e. that the first column of T have no zero element. It will be evident in the sequel that to satisfy (31) it is necessary for all diagonal elements of T Λ T^H to be the same. We now show that this requirement is equivalent to the requirement that all elements in the first column of T have the same magnitude. As T is unitary, this automatically means that all elements in its first column are nonzero. This result was never considered in [2] and, as will be evident in the sequel, its absence made the argument in [2] incomplete.

Lemma 5.3. With T and Λ as above, all diagonal elements of T Λ T^H are equal iff all elements of T e_1 have the same magnitude, in which case no element of T e_1, and no off-diagonal element of T Λ T^H, is zero.

Proof. Observe that with α = λ_1 − λ_2 ≠ 0,

Λ = λ_2 I + α e_1 e_1^⊤.

Thus

T Λ T^H = λ_2 I + α (T e_1)(T e_1)^H.

Thus the i-th diagonal element of T Λ T^H is λ_2 + α μ_i, where μ_i is the magnitude squared of the i-th element of T e_1. Equality of the diagonal elements thus holds iff all elements of T e_1 have the same magnitude. As T is nonsingular, no element of T e_1 can be zero. Also, from the display above, the off-diagonal elements of T Λ T^H are α times products of elements of T e_1 and their conjugates, and must be nonzero.
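The rank-one structure underlying Lemma 5.3 is easy to verify numerically. Below, T is taken to be the unitary DFT matrix, whose first column has equal-magnitude entries (an illustrative choice; the paper's T = U^H W depends on Q and x_1): the diagonal of T Λ T^H is then constant and its off-diagonal entries are nonzero.

```python
import numpy as np

N = 4
lam1, lam2 = 3.0, 1.0
alpha = lam1 - lam2
Lam = np.diag([lam1] + [lam2] * (N - 1))   # Lambda = lam2*I + alpha*e1*e1^T

# A unitary T whose first column has equal-magnitude entries: the DFT matrix.
T = np.fft.fft(np.eye(N)) / np.sqrt(N)

M = T @ Lam @ T.conj().T
diag = np.real(np.diag(M))
```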
We now characterize [Q, x_1] pairs that are admissible when N is even.
Because of (b) and (c), the denominators are non-zero. Thus, as (a) holds, for r ≠ s, n(θ_r ± θ_s) is a multiple of 2π, i.e. θ_r ± θ_s is a multiple of 2π/n. The second type of off-diagonal element on the left hand side of (47) arises, for suitable {k, l} ⊂ {1, · · · , N}, k ≠ l, and r ∈ {1, · · · , M}. Because of (c) the denominators are again non-zero. Thus, as (a) holds, 2nθ_r is a multiple of 2π. Thus (d) is also necessary.

To prove sufficiency, we first note that, as remarked before Lemma 5.3, (a)-(c) together with Theorem 5.2 assure that the x_i in (30) do not inhabit an (N−1)-dimensional hyperplane. That the off-diagonal elements of the summation in (47) are zero follows by reversing the arguments proving the necessity of (d). That the diagonal elements in the sum are equal follows from (a) and Lemma 5.3.
The foregoing represents a complete characterization of Q when N = 2M and n > N. The characterization is more complete than even the N = 2 characterization provided in [14]. It is interesting that the noncollinearity requirement mandates the use of Givens rotations for N = 2, in stark contrast to the N = 3 case considered presently, where rotation matrices preclude noncoplanarity.
We now show that for every M ≥ 1 a Q conforming to this characterization can be found. Recall that for N = 2, if a [Q, x_1] pair is admissible, then so is [Q, x] for every x ∈ R^2 with norm r_2. Call u_2 ∈ R^2 the vector of all ones. It is readily verified that H(r_2 u_2) has equal diagonal elements. Thus, with x_1 = r_2 u_2, from Lemma 5.3, with W as in (43), W e_1 has both elements of equal magnitude. For an appropriate unitary U, U^H W e_1 also has both elements of the same magnitude. Then Q = Q_1(2π/n), the Givens rotation by 2π/n, satisfies all the requirements of Theorem 5.4 and forms an admissible pair with any x ∈ R^2 of magnitude r_2. Thus n sensors are distributed equispaced on the circle centered at the origin with radius r_2. These are optimum spherical codes in R^2, in that they are a collection of n points on a circle such that the minimum distance between them is the maximum possible. An illustration of the n = 4 case is in Figure 1(a), and is identical to solutions in [18] and [5].
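The N = 2 canonical solution can be generated directly: with Q the Givens rotation by 2π/n and any x_1 of norm r_2, the x_i of (30) are equispaced on the circle. For the special case f_Y(y) = δ(y) treated in [13], the FIM is proportional to Σ_i x_i x_i^⊤ (all norms being r_2), which the sketch below confirms is a scaled identity; it also confirms the spherical-code property.

```python
import numpy as np

n, r2 = 5, 2.0
t = 2 * np.pi / n
Q = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

x1 = np.array([r2, 0.0])
X = np.stack([np.linalg.matrix_power(Q, i) @ x1 for i in range(n)])

# For f_Y(y) = delta(y), the FIM is proportional to sum_i x_i x_i^T,
# which here is a scaled identity:
S = X.T @ X

# Spherical-code property: minimum pairwise distance of equispaced points.
dmin = min(np.linalg.norm(X[i] - X[j])
           for i in range(n) for j in range(i + 1, n))
```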
As replacing any x_i by −x_i does not impair optimality, sensors equispaced on a semicircle are also optimum. More generally, (a) is obeyed if z_1 = [1, · · · , 1]^⊤/√N; indeed, in this case direct calculation verifies the claim. The θ_i chosen to be odd multiples of π/n will satisfy (d); for example, one could choose, for k ∈ {1, · · · , M},

θ_k = (2k − 1)π/n. (57)
Given that there are M of these, with n > 2M, one has 0 < θ_k < π for all k ∈ {1, · · · , M}, i.e. these satisfy (b) and (c) as well.
We now turn to odd values of N > 1.
The third type of off-diagonal element, for suitable {k, l} ⊂ {1, · · · , N}, k ≠ l, and r ∈ {1, · · · , M}, is of the displayed form. Again because of (c) the denominator is non-zero, and thus for this element to be zero (e) must hold. The proof of sufficiency follows as in the proof of Theorem 5.4.
The N-th eigenvalue, at −1, ensures that unlike in R^2 the optimizing Q is not a rotation matrix, making these optimum solutions very different from the two-dimensional case. For example, in R^3 successive x_i must flip across the x-y plane. As an example, choose x_1 = [1, 1, 1]^⊤. Then all elements in the first column of U^H W have equal magnitudes if U is as in (42) and W satisfies (43). This solution has the following attractive feature: all x_i lie on the bases of two inverted cones with axes parallel to the z-axis and vertices at the origin. All the points on the base of the first cone have a z-coordinate that is the negative of those on the base of the second cone. The rims of the bases lie on the sphere of radius r_2. The x_i alternate between the two bases; each x_i is further rotated by an angle of 2π/n, parallel to the x-y plane, from the previous x_i, as shown in Figure 1(b).
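The double-cone configuration can be generated concretely. Below, Q rotates by 2π/n parallel to the x-y plane while flipping the z-coordinate, and x_1 = r_2[1, 1, 1]^⊤/√3; the scaled-identity check again uses the special case f_Y(y) = δ(y) (as in [14]), under which the FIM is proportional to Σ_i x_i x_i^⊤.

```python
import numpy as np

n, r2 = 4, 1.0
t = 2 * np.pi / n
# Q rotates by 2*pi/n parallel to the x-y plane and flips the z-coordinate,
# so successive sensors alternate between the two cone bases.
Q = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0,        0.0,      -1.0]])

x1 = r2 * np.array([1.0, 1.0, 1.0]) / np.sqrt(3)
X = np.stack([np.linalg.matrix_power(Q, i) @ x1 for i in range(n)])

S = X.T @ X                 # proportional to the FIM when f_Y(y) = delta(y)
z = X[:, 2]                 # alternating cone bases
```

For n = 4 the four sensors are the vertices of a regular tetrahedron inscribed in the sphere of radius r_2, and the points span R^3, so they are non-coplanar.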
More generally, for odd dimensions an analogous pair is admissible, with Q_e as in (56).

6. Simulation Results for the three dimensional case. In this section, we present simulation results for the three-dimensional case.
We compare the average mean square error performance of the proposed optimum placement of four sensors with that of four non-coplanar, arbitrarily placed sensors, as described below. We choose A = 2, and the path loss coefficient was chosen to be β = 2. One hundred source locations are chosen from a uniform distribution in a sphere of radius 0.7. The sensors are placed on the surface of a sphere of radius 1. For each source location and each value of the SNR, the square of the localization error is averaged over 1000 iterations. The mean square error thus obtained is averaged over the 100 source locations. We use the following expression to evaluate the SNR. We use the gradient descent method, with the cost function derived from maximum likelihood estimation of the source location. The gradient descent algorithm is initialized at the source location, the step size is 10^{−3}, and the number of optimization iterations is restricted. Figures 2, 3 and 4 show that the proposed sensor placement performs better than arbitrary placements, though the improvement is less pronounced at high SNRs.
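The simulation loop can be sketched as follows. The paper's exact measurement equation is (4) in Section 2; here we assume the common log-distance form s_i = A − 10β log_10‖x_i − y‖ + w_i, and the noise level σ, the iteration cap, and the source location are illustrative choices, not values from the paper (A = 2, β = 2, the step size 10^{−3}, and initialization at the source follow the text).

```python
import numpy as np

rng = np.random.default_rng(7)
A, beta, sigma = 2.0, 2.0, 0.1     # A, beta from the text; sigma is illustrative
c_par = 10 * beta / np.log(10)

# Tetrahedral placement on the unit sphere: one optimal 4-sensor configuration.
X = np.array([[1, 1, 1], [-1, 1, -1], [-1, -1, 1], [1, -1, -1]]) / np.sqrt(3)

def measure(y):
    # Assumed log-distance shadowing model (see the hedge in the lead-in).
    d = np.linalg.norm(X - y, axis=1)
    return A - 10 * beta * np.log10(d) + sigma * rng.standard_normal(len(X))

def cost_grad(y, s):
    # ML least-squares cost 0.5*sum r_i^2 and its gradient w.r.t. y.
    d = X - y
    nd = np.linalg.norm(d, axis=1)
    r = s - (A - 10 * beta * np.log10(nd))
    g = -c_par * ((r / nd ** 2) @ d)
    return 0.5 * np.sum(r ** 2), g

y_true = np.array([0.2, -0.1, 0.3])
s = measure(y_true)

y_hat = y_true.copy()              # initialized at the source, as in the text
lr = 1e-3                          # step size 10^-3, as in the text
for _ in range(2000):              # iteration cap chosen here for illustration
    _, g = cost_grad(y_hat, s)
    y_hat = y_hat - lr * g
err = np.linalg.norm(y_hat - y_true)
```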
7. Conclusion. We have characterized conditions for optimum sensor placement for localizing a hazardous source located inside a hypersphere centered at the origin with a known radius r_1, its location having a radially symmetric probability density function. We characterize sensor positions that do not lie on any (N−1)-dimensional hyperplane and are at least r_2 > r_1 away from the center, while optimizing the FIM associated with localization from the received strength of a signal emitted by the source, which is subject to log-normal shadowing. As radial symmetry of the probability density function precludes a unique optimizing solution, we give necessary and sufficient conditions on [Q, x_1] pairs so that sensors placed at the x_i given by (30) avoid (N−1)-dimensional hyperplanes and optimize. We have provided attractive geometric interpretations of these canonical solutions. Future work includes settings involving greater directionality in the signal emitted by the source, and settings where sources are located in geometries other than hyperspheres.

Appendix.