DISPERSION WITH CONNECTIVITY IN WIRELESS MESH NETWORKS

Abstract. We study a multi-objective access point dispersion problem, where the conflicting objectives of maximizing the distance and maximizing the connectivity between the agents are considered under explicit coverage (or Quality of Service) constraints. We first model the problem as a multi-objective program and then consider constrained single-objective alternatives, which we propose to solve using three approaches. The first approach is an optimal tree search algorithm, where bounds are used to prune the search tree. The second approach is a beam search heuristic, which is also used to provide a lower bound for the first approach. The third approach is a straightforward integer programming formulation. We present an illustrative application of our solution approaches in a real wireless mesh network deployment problem.

1. Introduction. In this paper, we study a multi-objective access point dispersion problem, which can be used to model various applications. Here, the dispersion of the agents is not sufficient as the sole performance measure; there is also a need for some interaction between the agents in the form of mutual visibility or connectivity. In particular, we consider the deployment of a Wireless Mesh Network (WMN) with Quality of Service (QoS) constraints as an example multi-objective dispersion problem that takes into account both the dispersion and the connectivity of the access points.
Wireless Mesh Networking has emerged as a key wireless technology for numerous applications, such as broadband home networking, community and neighborhood networks, enterprise networking, and building automation [1,3]. WMNs are designed with self-organization and self-configuration capabilities, so that wireless mesh nodes can automatically establish links, dynamically maintain connectivity, and adapt to changes due to node/link failures or the introduction of new nodes to the network. The mesh nodes are network access points, which can act both as hosts that transmit their own data to some destination nodes and as routers that forward the data received from their neighbor nodes. In addition, mesh nodes are designed with functions to discover their neighbors and perform optimal routing across the network, while providing QoS similar to that of wired networks. QoS provisioning has been a critical requirement for wireless networks since the third generation cellular systems [11].
Successful operation of WMNs relies significantly on how well the network is connected, how reliable the links are, and how efficiently the resources are utilized. All of these can be addressed by how well the network is initially deployed. A good WMN deployment brings great benefits, such as better network management, better performance, and cost savings. In addition, covering an area with the least number of mesh nodes minimizes the overall network cost, and knowledge of an optimal mesh node placement offers improved performance as well as guidelines for extending the network, especially when it is not possible to re-deploy the entire network. The deployment of a WMN involves placing a number of mesh nodes, i.e., access points, in a given area such that the minimum distance between any two access points and the total connectivity between the access points are maximized at the same time. In this problem, the dispersion of the access points is desirable as it lowers the equipment cost of providing acceptable coverage for a given region, while the connectivity of the mesh structure must ensure that each pair of nodes is connected with the maximum possible data transmission rate (or throughput). Additionally, due to heterogeneous traffic demands, it may be necessary to provide stronger coverage for certain service points, where the connected users expect to receive higher access rates as their QoS requirement.
Maximizing both objective functions at the same time is the main challenge for designing current networks, and it will become even more critical for the cloud radio access networks (C-RAN) of fifth generation (5G) systems, where all the intelligence and antenna, signal, data and network processing will be concentrated at the cloud base stations [27]. In the future, the terminals will get simpler with only sensors and wireless transceivers, especially for Internet of Things (IoT) applications [32]. Hence, the main cost of the wireless systems will be due to the capital expenditure cost for building the infrastructure, which can be minimized by maximizing the dispersion. Maximizing dispersion also ensures that interference between nodes is controlled. This way, the quality of each link can be specified in terms of Signal-to-Noise Ratio (SNR), which is used to calculate the other metric to be maximized, the connectivity or the throughput as studied in this paper. In applications, such as surveillance, where the captured and relayed data over the WMN represent video or high quality images, connectivity maximization is critical.
The multi-objective formulation presented in this paper combines locating the multiple access points with maximizing the multiple objectives, namely dispersion and connectivity. Our formulation also handles additional coverage constraints for providing a desired QoS at certain service points of a WMN. Furthermore, we determine the locations of all access points at once; in the case of an extension of an existing network, the location problem can be solved by treating the previously located access points as fixed. Specifically, in our mathematical formulation, the access points are assigned to candidate locations such that they are dispersed to serve a given site while the connectivity among them is maximized. We propose to obtain the Pareto optimal solutions for this multi-objective problem, after which we model two single-objective problems with constraints on distance and connectivity. We then present two approaches for solving the resulting problems. The first is a tree search with bounding (TSWB), which is guaranteed to find the optimal solution at the expense of high computation time for very large-scale problems. The second is a beam search heuristic, which provides a fast solution that can also serve as an initial feasible solution within the tree search approach to prune a significant portion of the search tree. The application to a real indoor WMN deployment scenario shows that this heuristic provides a good solution in a very short computation time, even in the presence of QoS constraints. We also compare our results with those obtained by solving an exact integer programming formulation; our empirical study shows that TSWB is significantly faster than the integer programming approach as well.
The rest of the paper is organized as follows: In Section 2, we provide a review of the related literature. We present our problem in Section 3. The proposed solution approaches, the TSWB and the beam search algorithms, are introduced in Section 4. Section 5 is reserved for the illustrative WMN deployment application along with computational results. In Section 6, we summarize our conclusions and discuss future research directions.
2. Related work. When viewed as a facility location problem, our problem and the pure dispersion problems share common ground with the maximum clique problems, a family of NP-hard problems from graph theory [30]. A clique in a graph is a subgraph in which all vertices are pairwise adjacent. The maximum clique problem seeks the clique with the maximum number of vertices. The maximum weighted clique problem, on the other hand, aims at finding a clique of fixed cardinality that maximizes a given weight function, which is usually the sum of the weights on the edges. In the dispersion context, this corresponds to dispersing the vertices (similar to access points) by maximizing the sum of the distances among them [10], i.e., solving the maxsum dispersion problem. Macambira introduces a tabu-search heuristic for the maximum weighted subgraph problem [17]. Wood presents a branch-and-bound algorithm for the maximum clique problem [30]. Similarly, the maxmin dispersion problem, where the minimum distance between the facilities is maximized, can also be seen as a maximum clique problem; a heuristic for the maxmin dispersion problem based on this idea can be found in [7]. The main difference between these works and ours is that in our problem each edge between two vertices has a pair of weights, corresponding to the measure of connectivity and the distance between the vertices. Additionally, in our work, we consider QoS constraints in order to provide better coverage of the area.
The anti-covering location problem is another type of dispersion problem, concerned with opening the maximum number of facilities while ensuring a predefined distance among them. Murray and Church study the anti-covering location problem via Lagrangian relaxation [22]. Brimkov et al. focus on a geometrical agent placement problem [4]: the agents are placed at the intersection points of line segments (streets of a city, corridors in a building, and so on), and the objective is to minimize the number of agents required to secure each line segment. The dispersion literature also includes several works on multi-objective problems, where the objectives are cost and political opposition to the facility construction [9,25]. Several studies address similar problems in the form of semi-obnoxious facility location problems [25,20,21]. For instance, Skriver and Andersen work on locating a new airport in Denmark [28]. An airport is a semi-obnoxious facility: it should be close to the cities in order to minimize transportation costs, but at the same time it should not be too close to them because of the potential noise and pollution. The problem is studied for both planar and network cases. In the planar case, the authors divide the plane into smaller squares and, by using upper and lower bounds on the conflicting objective functions, eliminate some of these squares to reduce the search space. In the network case, cities are the nodes of the network and the edges represent the roads. The airport should be constructed on an edge, since the disturbing effect would be higher in the cities; meanwhile, cities have a certain demand depending on their population. Similar to the method used in the planar case, the authors divide the edges into smaller edges and determine the set of Pareto optimal locations by using lower and upper bounds.
This problem is similar to ours except that there is only one facility (the airport) to be located and both objective functions depend on the distances and the population density. Melachrinoudis and Xanthopulos focus on a single facility location problem on a plane, which is also solved by decomposition of the plane to calculate the efficient solutions [20]. In an earlier study, Melachrinoudis describes another single facility location problem, where a new facility that is interacting with the existing ones should be located [21]. The author decomposes a non-convex bi-criteria problem into linear bi-criteria problems to find all the efficient regions in the plane. Another single facility semi-obnoxious location problem is studied in [12]. Having the same objective (minimizing transportation cost and minimizing the obnoxious effects of the facility to the population centers), the problem differs in the metric used to model the propagation of the obnoxious effects of the facility. The authors use an elliptic distance metric to simulate the impact of wind on the distribution of pollution. By using the mathematical properties of the objectives, the authors determine an efficient set on the graph, where the facility can be located. Some applications of semi-obnoxious facility location problems with different objective functions can be found in [23,24]. Both of these works consider location of a single facility, while in our problem multiple facilities are to be located simultaneously. For a detailed overview of the location problems, we refer the reader to [8].
The cover problem has been studied mostly for the deployment of wireless sensor networks. A wireless sensor network management problem is studied in [29], where the sensor nodes in an area are activated over a rolling time horizon. The authors study two multi-objective problems that are similar to ours, but they also include a planning horizon. The problems are based on the optimization of the performance subject to a cost constraint and the optimization of the cost subject to a performance constraint. The authors give a constrained dynamic programming formulation with a bounding scheme; the bound comes from solving an unconstrained dynamic program obtained by using Lagrangian duality. Furthermore, the authors use a tree search with pruning approach similar to ours, which selects a number of locations with the best objective function values as candidate locations, and then these solutions are used as the candidate nodes for the next stage of the search. Khedr and Osamy study a minimum connected cover problem in wireless sensor networks in [15]. In a wireless sensor network, low-cost sensors are placed in a region so that the entire region is covered. The authors focus on reducing redundant coverage of sensors to reduce the energy consumption, and on providing a mobility-assisted coverage to improve the reliability of the network in case of a failure. In [26], Rebai et al. study a grid coverage problem with connectivity constraints in a wireless sensor network. They develop an integer linear programming formulation and several heuristics in order to solve the problem. As they are dealing with sensors, their network contains a sink node that gathers all the information, and their objective is to minimize the number of sensors required to cover an area. However, unlike our work, they do not consider obstacles in their problem. Our work combines dispersion, connectivity and coverage in the WMN deployment problem.
We consider QoS constraints with a realistic modeling of all losses, including obstacles such as walls, windows etc. We present novel solution methods to this problem and test the performance of our solutions on a real-life indoor deployment problem.
3. Problem definition. In the access point dispersion problem, we aim to maximize the dispersion and the connectivity between access points. Maximizing dispersion reduces the number of access points and hence the capital expenditure cost of the network infrastructure. In addition, it limits the interference between mesh nodes: in mesh networks, nodes are placed so that those more than two hops apart do not hear each other. This way, the quality of each link can be specified in terms of Signal-to-Noise Ratio (SNR), which is converted to the connectivity (i.e., throughput), the other metric that we maximize in this work.
In the access point dispersion problem, we are given a set of m candidate locations. The candidate points are fixed locations that are considered for the deployment of n access points such that p service points are covered with a given QoS. The service points correspond to locations, such as offices and classrooms, that need to be connected to a wireless network. In addition to covering the service points, our aim is to locate the access points at the candidate points such that they are dispersed to cover the site while the connectivity among them is maximized. To disperse the access points around a site, we maximize the minimum distance between any two access points, a common technique in the dispersion literature that is used especially in the obnoxious facility location problem; see [5] for a survey. The evaluation of the connectivity between the access points is usually problem dependent. We give the general mathematical model with an abstract connectivity function and elaborate on the connectivity function that we choose (which corresponds to connection quality) in Section 5.
We work with three matrices: the matrix D denotes the distances between the candidate locations, the matrix C denotes the connectivity between the candidate locations, and the matrix E denotes the connectivity between the candidate locations and the service points. In the rest of this paper, we assume that D and C are symmetric. The binary vector $x \in \{0,1\}^m$ denotes the locations of the access points; i.e., if $x_i = 1$, then an access point is placed at location i; otherwise, $x_i = 0$.
The first term of the objective function is the minimum distance between any two access points,
\[
d(x) = \min_{i < j \,:\, x_i = x_j = 1} D_{ij}.
\]
The second component of the objective function is the total connectivity value between the access points, given by
\[
c(x) = \sum_{i < j} C_{ij} x_i x_j.
\]
For every service point k, we guarantee coverage with a given QoS threshold. For this purpose, we use the connectivity values between the candidate points and the service points and find the access point that serves service point k with the highest connectivity, given by
\[
e_k(x) = \max_{i \,:\, x_i = 1} E_{ik},
\]
which denotes the maximum connectivity between the located access points and the specific service point k. We require this value to be greater than a QoS threshold value, denoted by $e_{\min}$, for each service point. Note that the service points can be located anywhere; they do not have to be restricted to access point locations.
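As a concrete illustration, these three quantities can be evaluated directly from the matrices. The following sketch is our own (the data and function names are hypothetical, not from the paper); it assumes D, C, and E are NumPy arrays and x is a 0/1 vector:

```python
import numpy as np

def d_min(x, D):
    """d(x): minimum distance between any two located access points."""
    idx = np.flatnonzero(x)
    return min(D[i, j] for a, i in enumerate(idx) for j in idx[a + 1:])

def c_total(x, C):
    """c(x): total pairwise connectivity between located access points."""
    idx = np.flatnonzero(x)
    return sum(C[i, j] for a, i in enumerate(idx) for j in idx[a + 1:])

def e_k(x, E, k):
    """e_k(x): best connectivity a located access point offers to service point k."""
    return max(E[i, k] for i in np.flatnonzero(x))
```

A placement x is then feasible when `e_k(x, E, k)` is at least $e_{\min}$ for every service point k.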
The overall mathematical programming model for the multi-objective (vector-valued) problem then becomes
\[
\begin{aligned}
\max_{x} \quad & \bigl(d(x),\, c(x)\bigr) \\
\text{s.t.} \quad & \sum_{i=1}^{m} x_i = n, \\
& e_k(x) \ge e_{\min}, \quad k = 1, \dots, p, \\
& x \in \{0,1\}^m.
\end{aligned}
\tag{1}
\]
In this model, we maximize the (multi-)objective function by locating n agents in the area and satisfying the QoS constraints for the service points. In our application, the connectivity values correspond to the probability of correct transmission and depend on several parameters, including the communication technique, the link quality, and so on (see Section 5).
Clearly, the multi-objective model 1 is apt to the concept of Pareto optimality [14]. A feasible solution $\bar{x}$ is Pareto optimal if no other feasible solution dominates it. As our objective is to maximize the two components of the objective function, we say that $\bar{x}$ dominates $\hat{x}$ if $d(\bar{x}) \ge d(\hat{x})$ and $c(\bar{x}) \ge c(\hat{x})$, with at least one inequality strict. When non-dominated solutions are found in a bi-objective model, it is common to place the solutions on a two-dimensional curve, called the Pareto curve, where each axis gives the value of one of the objective functions (see Figure 3 for an example). However, selecting a configuration using the Pareto curve requires human interaction. Therefore, it is also common to move one of the objectives into the set of constraints and require that the value of that objective function be bounded by a user-defined value. This is the well-known goal programming approach. Following the same steps, we first propose to focus directly on problem 1 to obtain all the Pareto optimal solutions. Then, we model two single-objective problems with bound constraints, as in the goal programming approach. Formally, we obtain
\[
\max_{x} \ d(x) \quad \text{s.t.} \quad c(x) \ge c_{\min},\ \ e_k(x) \ge e_{\min} \ \forall k,\ \ \sum_{i=1}^{m} x_i = n,\ \ x \in \{0,1\}^m, \tag{2}
\]
and
\[
\max_{x} \ c(x) \quad \text{s.t.} \quad d(x) \ge d_{\min},\ \ e_k(x) \ge e_{\min} \ \forall k,\ \ \sum_{i=1}^{m} x_i = n,\ \ x \in \{0,1\}^m, \tag{3}
\]
where $c_{\min}$ and $d_{\min}$ are user-defined parameters that correspond to the minimum allowed total connectivity and the minimum allowed distance between any two access points. Note that in problem 2, the minimum distance between two access points is maximized subject to a minimum total connectivity level and the QoS constraints. In problem 3, the total connectivity between the agents is maximized subject to a minimum separation (dispersion) between them and the QoS constraints.
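Filtering a set of candidate placements down to the non-dominated ones amounts to a pairwise dominance check. A minimal sketch of this filter follows (our own code; the (d, c) value pairs are hypothetical):

```python
def pareto_front(points):
    """Return the (d, c) pairs that no other pair dominates.

    A pair p dominates q when p is at least as good in both
    coordinates and the two pairs are not identical.
    """
    def dominates(p, q):
        return p[0] >= q[0] and p[1] >= q[1] and p != q

    return [p for p in points if not any(dominates(q, p) for q in points)]
```

Plotting the surviving pairs, one per axis, yields exactly the Pareto curve described above.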
4. Solution approaches. We next propose two solution approaches based on tree search. The first approach is TSWB, which can be used to optimally solve the multi-objective problem 1 as well as the constrained single-objective problems. In the worst case, however, tree search is an exhaustive approach. Therefore, we also propose a beam search algorithm as a fast heuristic alternative for large-scale problems. Moreover, the results obtained with the beam search or the multi-objective problem can be used to provide the initial solutions for the tree search approach. However, the performance of the heuristic approach may deteriorate when we add constraints to the model, such as the QoS constraints for the service points.
One of the pioneering works in combining dynamic programming with branch-and-bound is given in [19]. The authors combine these two methods to solve discrete mathematical programs. Their hybrid algorithm uses elimination by feasibility, domination and optimality. We also make use of these elimination rules; in our application, we use lower and upper bounds to prune the search tree. Carraway and Schmidt use a dynamic programming approach with a branch-and-bound strategy for the capital allocation problem with interdependent projects, which requires a memory feature in the states [6]. The authors develop upper and lower bounds for improving computation time and also memorization of the computed bounds. Our problem also requires the memory of past stages, which makes the problem harder as two incomplete solutions cannot dominate each other.
An idea similar to the beam search used here is given in [18], where the authors apply it to the time-dependent traveling salesman problem. In that application, at each stage a certain number of solutions (a user-defined parameter) are carried as candidate solutions to the next stage. This approach does not guarantee optimality, but it is more space- and time-efficient than an unrestricted dynamic programming method. Furthermore, it offers more flexibility for incorporating precedence relations or time dependence than linear or nonlinear programming formulations and heuristics.
4.1. Tree search with bounding. Our first approach is TSWB, illustrated in Figure 1. The root node of the tree starts with the empty set, and the k-th level of the tree consists of the cardinality-k subsets of $\{1, \dots, m\}$. The leaf nodes are at the n-th level of the tree. Since we generate all cardinality-k subsets of $\{1, \dots, m\}$ at each level, there are $\sum_{k=0}^{n} \binom{m}{k} = \sum_{k=0}^{n} m!/(k!(m-k)!)$ nodes in the full tree.
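For a sense of scale, the node count of the full tree can be computed directly; a quick sketch (the values of m and n below are our own, for illustration):

```python
import math

def full_tree_size(m, n):
    """Number of nodes in the full search tree: sum of C(m, k) for k = 0..n."""
    return sum(math.comb(m, k) for k in range(n + 1))

# With m = 10 candidate locations and n = 3 access points, the full tree
# already has 1 + 10 + 45 + 120 = 176 nodes.
```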
Although the tree search approach can be exhaustive, the computation time can be significantly improved by using a bounding scheme. In Figure 1, $\beta_U^d$ represents the upper bound on the distance function d(·) and $\beta_U^c$ is the upper bound on the connectivity function c(·) at that node. The pair $(\beta_I^d, \beta_I^c)$ corresponds to one of the non-dominated solutions and plays the role of the incumbent bound, as in the well-known branch-and-bound scheme [16]. The parameter $\beta_U^e$, defined as the upper bound vector for the function e(·), is also used to fathom nodes if a service point cannot receive the necessary service level, but it is omitted in Figure 1 for a clear exposition. A more detailed analysis of the usage of these bounds is given later. Note that when the cardinality of a solution is n, these upper bounds correspond to the actual values of the functions, as we have reached a leaf node of the search tree.
To traverse the tree, we apply a depth-first search strategy using simple stacks. Using depth-first search helps us reach new lower bounds faster, and hence makes the bounding mechanism more efficient. The steps of the tree search method with bounding are given in Algorithm 1. At the beginning, a set of Pareto optimal solutions is used to form the initial solution set N (line 2). These solutions can come from any heuristic, including the beam search heuristic discussed in Section 4.2.
Determination of Upper Bounds. As shown in Algorithm 1, at each node the upper bounds for both objectives need to be calculated. The bounding works as follows. Notice that in subsequent iterations the minimum distance value cannot increase, since adding more access points may only decrease the minimum distance. So the minimum distance value at node S gives the upper bound $\beta_U^d(S)$ for the nodes branching from this node. Obtaining the total connectivity bound $\beta_U^c(S)$, on the other hand, is more involved. We can divide the calculation of this bound into the sum of two parts: the first part consists of the total connectivity values between the located access points and the new access points, and the second part is the total connectivity between the new access points that are yet to be located. Suppose that we have already located q access points and the last access point is located at candidate location r. For the first part, we must find the n − q access points that give the maximum connectivity value with the already located access points. To avoid symmetries, we keep the solutions ordered, so those n − q access points must come from the set $\{r + 1, \dots, m\}$. For the other component of the connectivity, we select the maximum $(n-q)(n-q-1)/2$ connectivity values between the access points in the set $\{r + 1, \dots, m\}$. An illustrative example of finding the connectivity upper bound follows.
Figure 1. An illustration of the TSWB approach.
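The two-part connectivity bound can be sketched as follows. This is our own rendering of the procedure just described (the matrix C below is hypothetical; `located` is the ordered partial solution, ending at candidate r):

```python
from itertools import combinations

def connectivity_upper_bound(located, n, C):
    """Upper bound on c(x) for a partial solution of q located points.

    Adds (i) the connectivity already fixed among located points,
    (ii) the n - q best candidate-to-located connectivity sums, and
    (iii) the (n-q)(n-q-1)/2 largest pairwise values among the
    remaining candidate locations {r+1, ..., m}.
    """
    m, q, r = len(C), len(located), located[-1]
    remaining = range(r + 1, m)   # ordered solutions: only look past r
    fixed = sum(C[i][j] for i, j in combinations(located, 2))
    to_located = sorted((sum(C[i][j] for j in located) for i in remaining),
                        reverse=True)
    part1 = sum(to_located[: n - q])
    pairs = sorted((C[i][j] for i, j in combinations(remaining, 2)),
                   reverse=True)
    part2 = sum(pairs[: (n - q) * (n - q - 1) // 2])
    return fixed + part1 + part2
```

The bound is valid because each of the three sums can only overestimate the corresponding portion of any completed solution.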
Example 1. Consider a problem with six access points and ten candidate locations. Suppose that three access points are already located at candidate locations {1, 3, 4}, so we proceed to place the remaining access points starting from location 5, using the connectivity matrix C available at this stage.
Algorithm 1 Tree search with bounding
    if |S| < n then
13:     stack.push(S)   ▷ Node cannot be eliminated; add to the stack.
    N ← update(N, S)   ▷ Update N as we reached a leaf node.
17: stack.pop()
18: Output: N
Unlike the minimum distance, the connectivity value of a service point can increase in subsequent iterations, depending on the connectivity values. To obtain an upper bound, we simply choose the best connectivity value for a service point that can be obtained by placing an agent at a location that is yet to be considered. An example of calculating $\beta_U^e(S)$ is given below.
Example 2. Consider a problem with four access points, ten candidate locations, and eight service points. Suppose that three access points are already located at candidate locations {1, 3, 4}, so we proceed to place the remaining access points starting from location 5, using the connectivity matrix E available at this stage. We first find the maximum connectivity value of each service point over the located access points (1, 3, and 4), which also gives a lower bound for the connectivity values of the service points. We then find the maximum connectivity value over the access points that are not yet located (from 5 to 10) and select the largest of the two values as the upper bound on the connectivity value of each service point. For the first and third service points, there is no change in the connectivity value, whereas for the second service point, the value increases from 0.66 to 0.91. If the quality threshold value is $e_{\min} = 0.8$, the partial solution is potentially feasible and we explore the children of that partial solution. However, if $e_{\min} = 0.9$, service points 5 and 7 cannot be served adequately, and we eliminate the solution using the feasibility check.
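The feasibility test based on $\beta_U^e$ can be sketched in the same spirit (our own code, not the paper's listing; the matrix E is hypothetical, with one row per candidate location and one column per service point):

```python
def service_points_coverable(located, E, e_min):
    """Fathom test: can every service point still reach connectivity e_min?

    For each service point, combine the best value over the located
    access points (a lower bound) with the best value over the candidate
    locations not yet considered (the remaining upside).
    """
    m, r = len(E), located[-1]
    for k in range(len(E[0])):
        best = max(E[i][k] for i in located)
        upside = max((E[i][k] for i in range(r + 1, m)), default=0.0)
        if max(best, upside) < e_min:
            return False   # service point k can never be served adequately
    return True
```

If the test fails, the entire subtree below the partial solution can be fathomed, since no completion can repair the uncovered service point.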

4.2. Beam search. The beam search approach is a fast heuristic that can be applied to the unconstrained problems. The main idea of this approach is to store, at every stage, at most a fixed number W of feasible solutions, namely those that yield the W best objective function values. A truncated search is therefore conducted, since the width of the tree is at most W throughout the search. Notice that when W = 1, we obtain a greedy approach. This myopic approach does not necessarily yield the optimal solution; however, it is extremely fast, and it can also be used to provide the initial solution for the TSWB algorithm (Algorithm 1, line 1).
As obtaining a good feasible solution helps with bounding, we modified the algorithm to suit constrained problems as well. To obtain a solution for our constrained problems, we could keep the algorithm unaltered and check the feasibility of the solutions at the very end; in our experiments, however, this approach failed to yield feasible solutions for most of the problems. As a result, we decided to integrate the constraints into every step of the beam search. Independent of the problem we solve, the idea is to keep the W best solutions that satisfy a certain constraint depending on the problem at hand. Algorithm 2 shows the details of the beam search heuristic. Here, the generic function v(·) is used to evaluate the objective function value of a solution; similarly, we use a generic u(·) to check the feasibility of the constraints. The set of the W best feasible solutions at stage j is denoted by S_j. The algorithm starts with the main loop (line 2) and places the first access point in one of the m locations, then continues with the truncated tree search; thus, the search is repeated by placing the first access point in each one of the m locations. If a feasible solution has a better objective function value than a solution in S_j, then it is added to the set and the set is updated (lines 7 and 8). The algorithm ends by returning the solution with the maximum objective function value (line 9). Overall, the computational complexity of the beam search algorithm is of the order of O(W m^2 n).
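The width-W search just described can be sketched as below. This is our own Python rendering, not the paper's listing: v evaluates a partial solution, u checks feasibility, and, as a simplifying assumption, solutions are kept ordered to avoid symmetric duplicates:

```python
def beam_search(m, n, W, v, u):
    """Truncated tree search keeping the W best feasible solutions per stage."""
    # Stage 1: try every location for the first access point.
    beams = sorted([(k,) for k in range(m) if u((k,))],
                   key=v, reverse=True)[:W]
    for _ in range(2, n + 1):
        # Grow each kept solution by one location past its last index,
        # keeping only feasible extensions.
        grown = [s + (k,) for s in beams for k in range(s[-1] + 1, m)
                 if u(s + (k,))]
        beams = sorted(grown, key=v, reverse=True)[:W]
    return max(beams, key=v, default=None)
```

With W = 1 this reduces to the greedy approach; larger W widens the search at the O(W m^2 n) cost noted above.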
In our models, we have three types of limiting constraints: the minimum distance between agents, the total connectivity between the agents, and the QoS for the service points. For the constraint on the minimum distance, we require that the partial solution satisfy the minimum distance requirement, i.e., $d(s \cup \{k\}) \ge d_{\min}$. For the constraint on the total connectivity, we make sure that a partial solution attains the required connectivity on average; for a partial solution $s \cup \{k\}$ of cardinality j out of the final n, the constraint becomes $c(s \cup \{k\}) \ge c_{\min} \binom{j}{2} / \binom{n}{2}$. For the QoS constraints on the service points, we do not exercise a feasibility check at every step of the algorithm; instead, we check the solutions at the end to see whether they respect the service constraints.
Algorithm 2 Beam search
4: for j = 2 to n do
5:   for s ∈ S_{j−1} do
6:     for k = 1 to m do
7:       if (v(s ∪ {k}) > min{v(t) | t ∈ S_j}) and u(s ∪ {k}) then
8:         update(S_j, s ∪ {k}, W)
9: Output: arg max{v(t) | t ∈ S_j}
Algorithm 2 can be used within Algorithm 1 in two ways. In the first way, the solutions obtained with the beam search algorithm are fed to Algorithm 1 as the initial solution (or as the initial solution set for the multi-objective problem). To obtain this set, we run Algorithm 2 for varying W values and solve the two extreme cases: (i) neglect the distance values and maximize the total connectivity; (ii) neglect the connectivity values and maximize the minimum distance. In the second way, we may integrate the beam search into the search tree of Algorithm 1 to obtain a better lower bound and fathom more branches. During our computational study, we implemented this idea for W = 1. Even though we were able to find better bounds, the solution time increased because of the excessive number of calls added by the beam search algorithm. Another important point is that increasing W may deteriorate the quality of the solutions: as the heuristic is myopic, increasing W may prevent selecting nodes at a stage that have promising branches in subsequent iterations. Since the heuristic is very fast, in our applications we run it with different W values and take the best solution as the lower bound.
5. WMN deployment. We next study the application of the proposed models and solution approaches in a real WMN node placement problem. In this problem, the connectivity corresponds to the accessibility between two wireless nodes. The connectivity metric is obtained by normalizing the throughput between the nodes, which depends on the parameters of the environment and on the properties of the network nodes that are to be located.
In addition to dispersing the access points around the coverage area, we also ensure that the access points maintain a certain connectivity value among themselves and with the service points, where the demand should be satisfied with a given quality.
To obtain the connectivity matrix, we use the model given in [13], where the entries in the matrix represent connectivity values, calculated as the normalized throughput between the network access points. The resulting connectivity matrix is symmetric with values in the range [0, 1]. In this setting, we assume that after locating the nodes, the resulting network is connected, and hence, no further feasibility check is necessary. Considering the quality of modern wireless equipment, this is a realistic assumption. Furthermore, we assume that neighbor nodes (nodes that are one hop away from each other) share the channel via a contention-based protocol (such as the IEEE 802.11 WLAN protocol), and that nodes that are two or more hops away from each other do not hear each other. This way, inter-node interference is not experienced during a transmission, and the quality of each link can be specified in terms of the Signal-to-Noise Ratio (SNR), which is converted to the connectivity metric, throughput, as explained next.
The connectivity between nodes is assessed by the SNR between them, which denotes the quality of the channel. For this purpose, we first calculate $SNR_0$, the reference SNR level, measured at a reference distance $d_0$, typically taken as 1m. Parameter $\alpha$ is the path loss factor (exponent), specific to the propagation environment. Transmit power, the antenna gain of the transmitter, and the antenna gain of the receiver are denoted by $P_t$, $G_t$, and $G_r$, respectively. These parameters depend on the properties of the wireless equipment. The power of the received signal at $d_0$ is given by
\[
P_r(d_0) = P_t G_t G_r \left( \frac{\lambda}{4 \pi d_0} \right)^2,
\]
where $\lambda$ is the wavelength of the transmitted signal. The calculation of $SNR_0$ depends on two other environment factors, the bandwidth $B$ and the noise level $N_0$. Formally, it is given by
\[
SNR_0 = \frac{P_r(d_0)}{N_0 B}.
\]
If there is no obstacle (or barrier) between points $i$ and $j$, then the SNR value between the access points becomes
\[
SNR_{ij} = SNR_0 \left( \frac{d_0}{D_{ij}} \right)^{\alpha},
\]
where $D_{ij}$ is the distance between points $i$ and $j$ in meters. Typically, power or SNR values are expressed in logarithmic scale as $10 \log(SNR_{ij})$, measured in decibels (dB). When there is an obstacle between two access points, the attenuation due to that obstacle is incorporated into the model [2]. Suppose that points $i$ and $j$ have $K$ barriers between them with attenuation levels $t_1, t_2, \cdots, t_K$ (dB); then the total attenuation level between these two access points becomes
\[
t_{ij} = \sum_{k=1}^{K} t_k.
\]
The attenuation values depend on the material the obstacle or barrier is made of [2]. Using the total attenuation level, we obtain the SNR value of the channel between network access points $i$ and $j$ as
\[
SNR_{ij} = SNR_0 \left( \frac{d_0}{D_{ij}} \right)^{\alpha} 10^{-t_{ij}/10}.
\]
Let $q$ be the number of encoded bits (number of data bits to be transmitted in unit time) and let the size of the codeword be 32; then the code rate is obtained as $\frac{q}{32}$ and the success rate becomes
\[
P_s(q) = \sum_{i=0}^{\lfloor (32-q)/2 \rfloor} \binom{32}{i} \varepsilon^i (1-\varepsilon)^{32-i},
\]
where $\varepsilon = \frac{1}{2 + SNR_{ij}}$ denotes the bit-error probability, obtained as a function of $SNR_{ij}$ [2].
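The SNR model described above can be transcribed numerically as follows; the function follows the stated formulas directly, while the numeric values in the usage example are placeholders rather than the parameters of Table 1:

```python
import math

def snr_between(P_t, G_t, G_r, lam, d0, alpha, B, N0, D_ij, atten_db=0.0):
    """Linear-scale SNR between candidate points i and j.

    P_r(d0) is the received power at the reference distance d0 (Friis
    equation), SNR_0 = P_r(d0) / (N0 * B) is the reference SNR, the
    factor (d0 / D_ij)**alpha models path loss, and atten_db is the
    total barrier attenuation t_1 + ... + t_K in dB.
    """
    P_r0 = P_t * G_t * G_r * (lam / (4 * math.pi * d0)) ** 2
    snr0 = P_r0 / (N0 * B)
    return snr0 * (d0 / D_ij) ** alpha * 10.0 ** (-atten_db / 10.0)

def bit_error(snr):
    """Bit-error probability epsilon = 1 / (2 + SNR_ij)."""
    return 1.0 / (2.0 + snr)

# Placeholder parameters: 100 mW transmit power, unit antenna gains,
# 2.4 GHz carrier (lambda = 0.125 m), alpha = 3, 20 MHz bandwidth,
# thermal-like noise level, nodes 30 m apart.
snr_clear = snr_between(0.1, 1.0, 1.0, 0.125, 1.0, 3.0, 20e6, 4e-21, 30.0)
snr_wall = snr_between(0.1, 1.0, 1.0, 0.125, 1.0, 3.0, 20e6, 4e-21, 30.0,
                       atten_db=10.0)
```

A 10 dB barrier reduces the linear SNR by a factor of ten, which is how the attenuation term enters the connectivity computation.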
Finally, the connectivity value between two access points is found by evaluating the normalized throughput for different code rate values and choosing the one that maximizes it. That is,
\[
c_{ij} = \max_{q \in \{1, \ldots, 32\}} \frac{q}{32} P_s(q). \qquad (4)
\]
As can be inferred from (4), a network node can use different code rates for communicating with different nodes. In other words, we need to fix a code rate value for the node's communication with the possible nodes in each one of the candidate locations.
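The maximization over code rates is a small one-dimensional search. The sketch below assumes that a rate-q/32 code corrects up to ⌊(32−q)/2⌋ bit errors in a codeword; [2] may use a different decoder model, so treat the success-rate expression as an assumption:

```python
import math

def success_rate(q, eps, N=32):
    """Probability that an N-bit codeword with code rate q/N is decoded,
    assuming up to floor((N - q) / 2) bit errors are correctable."""
    t = (N - q) // 2
    return sum(math.comb(N, i) * eps ** i * (1.0 - eps) ** (N - i)
               for i in range(t + 1))

def connectivity(eps, N=32):
    """Normalized throughput: best code rate times its success rate."""
    return max((q / N) * success_rate(q, eps, N) for q in range(1, N + 1))
```

For a nearly error-free channel the maximum is attained at the full rate q = 32 and the connectivity approaches 1; as the bit-error probability grows, lower code rates win the trade-off and the connectivity value decreases.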

5.1. An indoor WMN deployment scenario and results. As an example scenario, we aim to deploy the wireless network nodes in a floor plan based on the first floor of the Faculty of Engineering and Natural Sciences (FENS) building at Sabancı University. In this plan, there are 70 candidate locations (see Figure 2 for the floor plan and the candidate locations, marked with +) and 44 possible service points (marked with • in Figure 2). We use Algorithm 2 to provide an initial solution for Algorithm 1. Our computer programs are implemented in C++ on a machine with an Intel i7-4600 2.1 GHz processor and 8 GB RAM running the Windows 8 operating system. We have used the parameters in Table 1 for constructing the connectivity matrix. Notice that when we construct the connectivity matrix, we need to find the attenuation levels. This can be accomplished by checking whether the line between any two candidate locations crosses any barrier (e.g. walls, windows) in the site. In our calculations, we assumed that there is no attenuation if nodes and barriers lie on the same line. Considering that the relays have some volume, this is not a very restrictive assumption. In the subsequent discussion, we use figures to demonstrate our results. In all of the floor plan figures, a solid circle shows a location selected by the TSWB algorithm, and the distances in the figures are given in meters. Furthermore, we employed the greedy heuristic to obtain an initial feasible solution. The candidate locations are shown in Figure 2. In the computational study, we present the results we obtain with $e_{\min} = 0.8$.

[Table 1. The problem parameters.]

Figure 3 shows the Pareto curve obtained for the multi-objective problem 1 using TSWB for n = 5. The Pareto curve is important because it helps us determine the connectivity and distance constraints in the single-objective problems.
According to Figure 3(a), in the unconstrained problem, we can see that it is possible to increase the minimum distance to more than 10m without a significant decrease in the connectivity. On the other hand, increasing the minimum distance to 25m significantly decreases the connectivity. For the constrained problem with $e_{\min} = 0.8$, similar observations can be made with respect to the Pareto curve in Figure 3(b). Figure 4 illustrates the solution obtained for problem 2 with $c_{\min} = 40$ for n = 10. In Figure 4, the part shown with the dashed line is a barrier formed of meshed glass. It has less attenuation than the walls of the building, and therefore it is preferable that wireless nodes are located around that region. The model places six nodes around the meshed glass, three nodes on the same corridor to increase connectivity, and a node on a different corridor to satisfy the demands of the service points on that corridor. Figure 5 shows the solutions that we obtain by solving problem 2 for n = 5 with two different $c_{\min}$ values. In Figure 5(a), the nodes are well dispersed around the building. Notice that, to increase the total connectivity, three nodes are placed around the meshed glass. In Figure 5(b), the agents are not well distributed in the building, as four of them are in the same corridor to satisfy the connectivity constraint as well as the service constraint. Figure 6 shows the different configurations that we obtain by solving problem 3 for n = 5 with two different $d_{\min}$ values. We can clearly see that the configurations in Figure 6 are quite similar and that they differ significantly from the configuration in Figure 5(b). In Figure 5(b), the access points are clustered around the same corridor of the building, whereas in Figure 6 the dispersion becomes the primary concern due to the higher $d_{\min}$ value. Figure 7 shows our solutions again for problem 3 with two $d_{\min}$ values, but this time for n = 10.
As the lower bound on the distance increases in Figure 7, we observe that the access points are forced to disperse along the outer corridors at the expense of decreasing the total connectivity. Table 2 summarizes the results of the solutions considered in this section. We compare the time required to solve the integer programming formulation of our problem (see Appendix A) using CPLEX [31] with that of our approaches under the 'Time' columns, where Time is reported in seconds. We also report the quality of the solutions obtained by the beam search heuristic under the 'Gap' column. In three instances, the beam search heuristic reaches the optimal solution. In another three instances, the gap is still quite small. However, in two problems, the heuristic provides a solution with a 50% deviation. The heuristic is especially suitable for finding an initial solution to the problem. The time spent by the heuristic is negligible and is not reported. We also conclude that, except for one instance, TSWB is an order of magnitude faster than CPLEX.
The advantage of using TSWB over exhaustive search is the possibility of fathoming a significant number of branches in the solution tree. Figures 8, 9 and 10 show the number of solutions that we search and accept at each stage of TSWB. Figure 8 shows the number of possible and selected nodes at each level of the search tree for the multi-objective problem. Figure 8(a) shows the number of possible solutions, for which the objective functions are evaluated. Figure 8(b) shows the number of accepted solutions, i.e., the solutions that are not dominated by the Pareto optimal solutions. The problem is computationally intense, as we are dealing with a multi-objective problem and elimination is harder. On the other hand, when we compare the results in Figure 8 with the total number of solutions, $\binom{70}{10} \approx 3.967 \times 10^{11}$, we can conclude that the search space is significantly reduced. Figures 9 and 10 show the number of solutions that we search and accept at each stage of the tree search when TSWB is applied to the problems given in Figure 4 and Figure 7(a). We report the details of these two problems because the values we obtain for the objective functions are quite close. Although our fathoming method is slow in the early levels of the tree, it becomes more efficient as we proceed, even if we cannot improve the lower bound. These numbers indicate a significant improvement compared to exhaustive search. In TSWB, if we have a good bound, then we can fathom large portions of the search tree. On the other hand, even when the initial bound is not very good, we observe that we can reach new, tighter bounds rather quickly. In the worst case, TSWB boils down to exhaustive search. Therefore, the effect of increasing n is worse than the effect of increasing m. The plots in Figure 9 show that the bounding becomes especially efficient after the fifth stage, i.e., after locating the fifth access point.
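For reference, the total number of solutions quoted above is simply the number of n-element subsets of the m candidate locations:

```python
import math

# Number of ways to place n = 10 agents on m = 70 candidate locations.
m, n = 70, 10
size = math.comb(m, n)
print(size)   # 396704524216, i.e. about 3.967e11
```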
In Figure 9(a), there are almost half a million candidate solutions at the fifth stage, but only a tenth of them are accepted as candidate solutions for the next stage, as shown in Figure 9(b). We also observe that the performance of the proposed approach depends on the tightness of the bound constraint. Tighter constraints help reduce the size of the search tree. It takes less than half a minute to solve this problem. However, if we solve the same problem with $c_{\min} = 35$, then it takes more than 2.5 minutes to solve the resulting problem.

[Table 2. Results per instance: beam search gaps and solution times.]

Figure 10 shows the number of solutions that we search and accept at each stage of the tree search when Algorithm 1 is applied to the problem given in Figure 7(a). The cardinality of the set of all solutions is the same as in Figure 9, since the values for m and n do not change. However, the number of searched nodes in Figure 10(a) is much higher than the number in Figure 9(a). On the other hand, when we compare the results in Figure 10(b) with the results in Figure 9(b), we observe that the increase in the number of accepted nodes is not as drastic as the increase in the number of searched nodes. This is because the problem in Figure 7(a) does not benefit from a tight bound as much as the problem in Figure 4 does.
Throughout the computational study in this section, we observe that solving problem 2 with a tight connectivity constraint is easier than solving problem 3. This is due to the fast reduction of the search space. When we use a high connectivity value as a constraint, there are far fewer feasible solutions, as the bound can easily be tight for a site in a real-world application. On the other hand, if there is a connectivity constraint with a small $c_{\min}$ value, then the upper bounds on the connectivity value are, in general, not tight enough in the early stages of the tree. In such a case, the problem becomes significantly harder. Therefore, the computation time of problem 2 is more affected by changes in constraint tightness. Note also that the connectivity function c(·) is additive, and the optimality bound resulting from this function improves slowly at each stage. However, the distance function d(·) is evaluated by taking the minimum over the distances between the locations; therefore, we reach tighter bounds rather quickly. Accordingly, problem 3 is less sensitive to changes in the constraints. We also noticed that changing the service constraint does not have a significant effect on the computation time: the dispersion of the agents is good enough to ensure high service levels.
6. Conclusion and future research. In this paper, we set forth a new multi-objective location problem, where the objectives are related to the dispersion as well as the mutual connectivity of the access points. After presenting the multi-objective mathematical model, we also consider two single-objective problems obtained by a goal programming approach. For solving these problems, we propose two solution approaches. The first is a TSWB approach, which searches the whole tree and uses pruning to improve the computation time. This approach is guaranteed to find an optimal solution; however, for very large problems, the computation time becomes high. The second approach is a beam search heuristic. This heuristic may not provide an optimal solution, but it may provide a feasible solution that can be used as a lower bound in the TSWB in order to reduce the computation time drastically. We also present an application in wireless mesh network design, accompanied by numerical experiments and a comparison with the integer programming approach. This application shows that the proposed models and the associated solution approaches are suitable for solving real-life problems. To the best of our knowledge, this is the first work that combines coverage and connectivity with QoS constraints (for service points), while considering obstacles in the environment. Our computational study shows that the optimal solution approach we present is suitable even for moderate-size instances. Using a good upper bound is very important for pruning the search tree, and we believe that there is room for improvement in this area. Along the same lines, we also plan to consider different heuristics to obtain better lower bounds. However, in our experiments we observed that the obtained lower bounds are already quite good, and hence, the gain in that area could be marginal.
Appendix A. Integer linear programming formulation. In this section, we present an integer linear programming formulation for problems 2 and 3. We first define, for $j > i$, the auxiliary decision variables
\[
y_{ij} = \begin{cases} 1, & \text{if two agents are located at candidate points } i \text{ and } j,\\ 0, & \text{otherwise.} \end{cases}
\]
This variable is used to linearize the quadratic terms in the objective functions and the constraints. We also define a nonnegative decision variable $d$ representing the minimum distance between two agents. Then, the integer linear programming formulation for problem 2 becomes
\[
\begin{array}{llr}
\text{maximize} & d, & (5)\\
\text{subject to} & \displaystyle\sum_{i=1}^{m-1} \sum_{j=i+1}^{m} c_{ij} y_{ij} \ge c_{\min}, & (6)\\
& d \le D_{ij} y_{ij} + M (1 - y_{ij}), \quad 1 \le i < j \le m, & (7)\\
& \displaystyle\sum_{i \in E_k} x_i \ge 1, \quad \forall k \in \{1, \ldots, p\}, & (8)\\
& x_i + x_j \le 1 + y_{ij}, \quad 1 \le i < j \le m, & (9)\\
& y_{ij} \le x_i, \quad 1 \le i < j \le m, & (10)\\
& y_{ij} \le x_j, \quad 1 \le i < j \le m, & (11)\\
& \displaystyle\sum_{i=1}^{m} x_i = n, & (12)\\
& x_i \in \{0, 1\}, \quad \forall i \in \{1, \ldots, m\}, & (13)\\
& y_{ij} \in \{0, 1\}, \; d \ge 0, \quad 1 \le i < j \le m, & (14)
\end{array}
\]
where $c_{ij}$ denotes the connectivity value between candidate points $i$ and $j$, and $E_k = \{i \mid i = 1, \ldots, m;\; E_{ik} \ge e_{\min}\}$, i.e., $E_k$ corresponds to the set of candidate points that can serve service point $k$. In the objective function (5), we maximize the minimum distance between any two agents. We restrict the total connectivity to be at least $c_{\min}$ by constraint (6). In constraints (7), $M$ corresponds to a large number. For any $i$ and $j$, if $y_{ij} = 0$, then the constraint is not binding; if $y_{ij} = 1$, then the minimum distance cannot be greater than $D_{ij}$. With constraints (8), we guarantee that each service point is served with a certain QoS by at least one agent. Constraints (9)-(11) make sure that $y_{ij} = 1$ if and only if $x_i = x_j = 1$. We limit the number of agents that we locate to $n$ with constraint (12). Constraints (13) and (14) are the domain constraints for the variables. For problem 3, the integer linear programming formulation can be written as
\[
\begin{array}{llr}
\text{maximize} & \displaystyle\sum_{i=1}^{m-1} \sum_{j=i+1}^{m} c_{ij} y_{ij}, & (15)\\
\text{subject to} & d \ge d_{\min}, & (16)\\
& \text{constraints } (7)\text{-}(14). &
\end{array}
\]
This time, we maximize the total connectivity through the objective function (15). With constraint (16), we guarantee that the agents satisfy the minimum distance restriction.
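As a sanity check of the formulation, a brute-force reference for problem 2 on a toy instance can enumerate the feasible placements directly. The instance data below are illustrative, and `solve_p2` mirrors the semantics of (5)-(14) without the y_{ij} linearization (assume n ≥ 2):

```python
from itertools import combinations

def solve_p2(D, C, E, e_min, c_min, n):
    """Problem 2 by enumeration: maximize the minimum pairwise distance
    subject to total connectivity >= c_min, with every service point k
    served by some selected point i with quality E[i][k] >= e_min."""
    m, p = len(D), len(E[0])
    best, best_d = None, float("-inf")
    for S in combinations(range(m), n):
        pairs = list(combinations(S, 2))
        conn = sum(C[i][j] for i, j in pairs)
        covered = all(any(E[i][k] >= e_min for i in S) for k in range(p))
        d = min(D[i][j] for i, j in pairs)
        if conn >= c_min and covered and d > best_d:
            best, best_d = S, d
    return best, best_d

# Toy data: four candidate points on a line; connectivity decays with distance.
pos = [0, 4, 5, 9]
D = [[abs(a - b) for b in pos] for a in pos]
C = [[max(0.0, 1.0 - D[i][j] / 10.0) for j in range(4)] for i in range(4)]
E = [[1.0] for _ in range(4)]   # one service point, reachable from everywhere
sol = solve_p2(D, C, E, e_min=0.8, c_min=0.5, n=2)
```

Without the connectivity constraint, the dispersion-optimal pair would be the two endpoints; with the constraint active, that pair (connectivity 0.1) is cut off and the enumeration returns a closer pair instead, illustrating the trade-off the formulation encodes.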