Optimal threshold control of a retrial queueing system with finite buffer

In this paper, we analyze the optimal control of a retrial queueing system with a finite buffer of size K. At any decision epoch at which the buffer is full, the controller has to make two decisions: one for new arrivals, deciding whether they are allowed to join the orbit (admission control); the other for repeated customers, deciding whether they are allowed to return to the orbit (retrial control). The problem is formulated as a Markov decision process. We show that the optimal policy has a threshold-type structure and that the thresholds are monotone in the operating and cost parameters. Furthermore, based on the structure of the optimal policy, we construct a performance evaluation model for computing the thresholds efficiently. The expression for the expected cost is obtained by solving the underlying quasi-birth-and-death (QBD) process. Finally, we provide numerical results to illustrate the impact of different parameters on the optimal policy and the average cost.


1. Introduction. Queueing systems in which customers who find all servers busy or the waiting buffer full have to join a group of unsatisfied customers, called the orbit, and retry for service after a period of time are called retrial queues (see Artalejo [3]). Many results on traditional, so-called non-controllable, retrial queueing models with various policies have been obtained (e.g. [2], [24] and [25]). Recently, the optimal control of queueing and inventory systems has received increasing attention, especially dynamic control problems in queueing systems. Controllable parallel and tandem queues with multiple customer classes and feedback policies have been studied (e.g. [1], [8] and [10]), while little work has appeared on controllable retrial queues. Controllable retrial queues are widely used as mathematical models of computer systems, telecommunication networks and inventory-production systems. In this paper, we discuss a retrial queueing system with a finite buffer in which an arriving customer joins the buffer immediately only if the number of customers in the buffer (including the customer in service) is less than K, and otherwise joins the orbit; customers in the orbit follow the constant retrial policy. Notable examples are given in [17] and [23].
The study of controllable retrial queueing systems is motivated by their wide applications in computer systems and inventory-production systems. Retrial queueing systems with a finite buffer involving both admission and retrial control are applicable to congestion control of buffers in packet-switching networks and to controlling the energy consumption of a computing server. Concretely, a customer is typically a message, packet or command in a computer system, and a computing node such as the CPU can be regarded as a server or service station, where the controller decides whether a packet is sent into an orbit to wait for service or rejected immediately; the energy consumption per unit time is related to the level of the workload in the system (see [27] and references therein). Moreover, in an inventory management system with retrial customers and a service facility using one item of inventory for each service, the results on the controllable retrial queueing system can be applied to improve the management of the system, helping manufacturers choose the optimal control policy to minimize the expected discounted cost.
Optimal admission control problems have been widely discussed in many different models, such as queueing systems, production-inventory systems and equilibrium game models. They were first studied by Heyman [14]. Stidham and Weber [21] made a comprehensive survey of early papers on admission control in queueing systems. Some recent works on optimal admission control in queueing models include Yoon and Lewis [26] and Son [20]. Hajek [12] and Helm and Waldmann [13] provided explicit characterizations of the optimal control as a threshold policy. In this paper, the retrial control is similar to the admission control; they simply apply to different classes of customers. We show that the optimal policy also has a threshold-type structure.
Some works related to the present paper are as follows. Benjaafar et al. [5] considered the optimal control of a production-inventory system with customer impatience and characterized analytically the sensitivity of the thresholds to operating parameters. Breuer et al. [6] studied threshold policies for controlled retrial queues with heterogeneous servers; they mainly discussed routing new customers to one of the idle servers or to the orbit if it is not full, and proposed a numerical procedure for optimal control based on Howard's iteration algorithm. However, despite the wide applicability of retrial queueing systems, few papers have studied their optimal control. We discuss the admission control and the retrial control simultaneously in our model. In addition, most earlier papers on Markov decision problems in queues use Howard's iteration procedure to obtain numerical results, whereas we analyze the structural properties of the optimal policy and compute the thresholds by constructing a performance evaluation model based on that structure and the QBD process. To the best of our knowledge, our paper is the first to apply the method of constructing a performance evaluation model to compute the optimal control thresholds of a retrial queueing system.
The rest of the paper is organized as follows. The model is described in detail in Section 2. We derive the structural properties of the optimal policy in Section 3. A performance evaluation model for computing the thresholds is established in Section 4, where we formulate the model as a two-dimensional Markov chain. In Section 5, we give some numerical examples. Finally, further discussion and conclusions are provided in Section 6.
2. Model description. We consider a single-server FCFS retrial queueing system with a finite buffer and an infinite orbit, where customers arrive at the system according to a Poisson process with rate λ. The service times of all customers are independent and exponentially distributed with parameter µ. The capacity of the buffer (including the customer in service) has an upper bound K. Upon arrival, a customer immediately joins the buffer if the buffer is not full; otherwise the customer is either accepted into the orbit, becoming a repeated customer, or rejected. Each customer who enters the system brings a reward r, while each repeated customer in the orbit incurs a holding cost h per unit time. It is assumed that the times between successive retrials are exponentially distributed with parameter ξ, independent of the number of customers in the orbit (see [16]). The interarrival times, service times and retrial times are mutually independent. When a retrial takes place, the retrying customer immediately joins the buffer if the buffer is not full; otherwise it either returns to the orbit or leaves the system with a cancellation cost c. To avoid trivial cases, we assume that the reward is at least the cancellation cost (r ≥ c) and that the cancellation cost is at least the minimum expected holding cost of a repeated customer (c ≥ h/ξ). Otherwise, the admission threshold equals the retrial threshold and it is always optimal for the controller to send a repeated customer who encounters a full buffer out of the system. More details are given in the following section.
The set of decision epochs consists of all arrival times of new or retrial customers. At the arrival time of a new customer, the controller must decide whether or not to admit the customer to the orbit if the buffer is full. When a retrial takes place and the buffer is full, the controller must determine how to handle the retrying customer: it either returns to the orbit or leaves the system immediately. Let X_i(t), i = 1, 2, denote the number of customers in the orbit and in the buffer (including the customer in service), respectively. The state of the system at time t can be described by {X(t) = (X_1(t), X_2(t)), t ≥ 0}. The system state space is E = {(x_1, x_2) | x_1 ∈ N, x_2 = 0, 1, 2, ..., K} with N = {0, 1, 2, ...}. We consider stationary Markov policies π, under which the system evolves as a continuous-time Markov chain. Due to the Markovian property, the optimal policy depends only on the current state of the system.
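To make the dynamics concrete, the controlled chain can be simulated under a fixed threshold policy. The following sketch is illustrative only: the transition rules and cost accounting are our reading of the model, and all parameter values, including the thresholds m and n, are arbitrary choices, not values from the paper.

```python
import random

# Hedged sketch: event-driven simulation of the controlled retrial queue under
# a fixed threshold policy (m, n). Parameters are illustrative assumptions.
lam, mu, xi = 1.0, 1.2, 0.5      # arrival, service, retrial rates
h, r, c, K = 0.5, 6.0, 4.0, 3    # holding cost, reward, cancellation cost, buffer
m, n = 5, 3                      # admission and retrial thresholds (n <= m)
random.seed(42)

x1 = x2 = 0                      # orbit size, buffer content (incl. in service)
t, cost, T = 0.0, 0.0, 10_000.0
while t < T:
    rate = lam + (mu if x2 > 0 else 0.0) + (xi if x1 > 0 else 0.0)
    dt = random.expovariate(rate)
    cost += h * x1 * dt          # holding cost accrues continuously
    t += dt
    u = random.uniform(0.0, rate)
    if u < lam:                  # new arrival
        if x2 < K:
            x2 += 1              # joins the buffer
        elif x1 < m:
            x1 += 1              # admitted to the orbit
        else:
            cost += r            # rejected: reward lost
    elif u < lam + (mu if x2 > 0 else 0.0):
        x2 -= 1                  # service completion
    else:                        # retrial epoch (only possible when x1 > 0)
        if x2 < K:
            x1 -= 1; x2 += 1     # successful retrial
        elif x1 >= n:
            x1 -= 1; cost += c   # cancelled with cost c
        # else: returns to the orbit, state unchanged
print(cost / t)                  # estimated long-run average cost
```

The printed value estimates g(m, n); an exact computation via the stationary distribution is discussed in Section 4.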
Let v_π(x_1, x_2) denote the expected discounted cost over an infinite horizon under a policy π and starting state (x_1, x_2). The goal of the controller is to find an optimal policy π* that minimizes the infinite-horizon expected discounted cost, based on the number of customers in the system. Using the standard tools of uniformization and normalization, we cast the problem as a Markov decision problem and construct a discrete-time equivalent of the original queueing system. Without loss of generality, we write the optimal value function as v = v_{π*} and assume that λ + µ + ξ + α = 1, where α > 0 is the discount rate. As shown in Puterman [18], the optimal policy π* and the optimal value function v are the solutions of the optimality equation v = Tv, in which the dynamic programming operators T_adm and T_ret act on v. Under our model and control strategy, the operator T_adm models the admission decision for a newly arriving customer and the operator T_ret models the retrial decision for a repeated customer.
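A plausible form of the optimality equation and the two operators, under our reading of the model (an assumption, not the paper's verbatim formulas), with the normalization λ + µ + ξ + α = 1 and (x_2 − 1)^+ = max(x_2 − 1, 0), is:

```latex
v(x_1,x_2) = h x_1 + \lambda\, T_{\mathrm{adm}} v(x_1,x_2)
           + \mu\, v(x_1,(x_2-1)^+) + \xi\, T_{\mathrm{ret}} v(x_1,x_2), \qquad (1)

T_{\mathrm{adm}} v(x_1,x_2) =
\begin{cases}
v(x_1,x_2+1) - r, & x_2 < K,\\
\min\{\, v(x_1+1,K) - r,\; v(x_1,K) \,\}, & x_2 = K,
\end{cases}

T_{\mathrm{ret}} v(x_1,x_2) =
\begin{cases}
v(x_1,x_2), & x_1 = 0,\\
v(x_1-1,x_2+1), & x_1 > 0,\ x_2 < K,\\
\min\{\, v(x_1,K),\; v(x_1-1,K) + c \,\}, & x_1 > 0,\ x_2 = K.
\end{cases}
```

At a full buffer, T_adm chooses between admitting the new customer to the orbit (earning r) and rejecting it, while T_ret chooses between returning the repeated customer to the orbit and cancelling it at cost c.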
3. Structure of the optimal policy. In this section, we derive structural properties of the optimal policy, which provide basic insight and allow us to find the optimal policy with less computational effort thanks to a reduction of the solution search space. To this end, we first show that the optimal value function v(x_1, x_2) satisfies the properties in the following lemma.

Lemma 3.1. The optimal value function v(x_1, x_2) satisfies: (1) v is non-decreasing in x_1, i.e., ∆_1 v(x_1, x_2) ≥ 0; (2) v is convex in x_1, i.e., ∆_1 v(x_1 + 1, x_2) ≥ ∆_1 v(x_1, x_2).
The proof of Lemma 3.1 is given in the Appendix. The proof mainly uses induction: we first show that the operators T_adm and T_ret preserve the above properties, which are defined in Çil et al. [9], and then obtain the properties of the optimal value function by recursively defining v_{n+1} = T v_n for arbitrary v_0, where the actions converge to the optimal policy as n → ∞. For the existence and convergence of the value function and the optimal policy, see Aviv and Federgruen [4] and Sennott [19]. We can now derive the following properties of the optimal policy from the above properties of the value function.
Theorem 3.1. The optimal policy is a threshold policy with two thresholds m and n. For the admission control, when the buffer is full it is optimal to accept a newly arriving customer into the orbit if x_1 < m and to reject otherwise. For the retrial control, when the buffer is full it is optimal to let a repeated customer return to the orbit if x_1 < n and to send it out of the system otherwise. Furthermore, the two thresholds are defined as m = min{x_1 : ∆_1 v(x_1, K) ≥ r} and n = min{m, min{x_1 : ∆_1 v(x_1, K) ≥ c}}.

Proof. From the definition of the operators T_adm and T_ret, Lemma 3.1 shows that the optimal policy is a threshold policy and guarantees the existence of the thresholds m and n. Moreover, when the buffer is full, it follows that it is optimal to accept a newly arriving customer into the orbit if x_1 < m and to reject otherwise; for the retrial control, it is optimal to let a repeated customer return to the orbit if x_1 < n and to send it out of the system otherwise.
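The thresholds of Theorem 3.1 can be computed numerically by value iteration on a truncated orbit. The sketch below assumes specific forms for T_adm and T_ret (our reading of the model: admission earns r, cancellation costs c, holding accrues at rate h x_1) and illustrative parameter values; it is not the paper's algorithm.

```python
import numpy as np

# Hedged sketch: value iteration for the discounted problem, orbit truncated at N.
lam, mu, xi, alpha = 1.0, 1.2, 0.5, 0.05
h, r, c, K = 0.5, 4.5, 4.0, 3          # r >= c >= h/xi, as assumed in Section 2
N = 200                                 # orbit truncation level
tot = lam + mu + xi + alpha             # uniformization constant

v = np.zeros((N + 1, K + 1))
for _ in range(4000):
    adm = np.empty_like(v)
    adm[:, :K] = v[:, 1:] - r                        # buffer not full: join buffer
    up = np.append(v[1:, K], v[N, K])                # v(x1+1, K), truncated at N
    adm[:, K] = np.minimum(up - r, v[:, K])          # admit to orbit vs reject
    srv = np.empty_like(v)
    srv[:, 0] = v[:, 0]                              # fictitious service event
    srv[:, 1:] = v[:, :K]                            # service completion
    ret = np.empty_like(v)
    ret[0] = v[0]                                    # empty orbit: no retrial
    ret[1:, :K] = v[:N, 1:]                          # successful retrial
    ret[1:, K] = np.minimum(v[1:, K], v[:N, K] + c)  # back to orbit vs cancel
    x1 = np.arange(N + 1)[:, None]
    w = (h * x1 + lam * adm + mu * srv + xi * ret) / tot
    if np.max(np.abs(w - v)) < 1e-10:
        v = w
        break
    v = w

d1 = v[1:, K] - v[:-1, K]                            # Delta_1 v(x1, K)
m = int(np.argmax(d1 >= r)) if (d1 >= r).any() else N
n = int(np.argmax(d1 >= c)) if (d1 >= c).any() else N
print("admission threshold m =", m, "retrial threshold n =", n)
```

Since c ≤ r, the computed thresholds satisfy n ≤ m, as in Theorem 3.1.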
Next, we discuss the monotonicity of the two thresholds m and n with respect to various system parameters. Similarly to the method of Benjaafar et al. [5] and Çil et al. [9], we compare the optimal value functions of two systems which are identical except for the value of one system parameter, denoted by q. The optimal admission threshold, retrial threshold and value function corresponding to a given parameter q are denoted by m_q, n_q and v_q(x_1, x_2), respectively, where q ∈ {λ, µ, ξ, r, c, h}.
In order to derive the monotonicity of the two thresholds, we first study the properties of the value function v_q(x_1, x_2) in two systems with different values of the parameter q; of particular note here is Koole [15]. The uniformization rate depends on {λ, µ, ξ} and needs to be constant for the two systems to be comparable. We therefore rescale time using a uniformization rate τ which is sufficiently larger than λ + µ + ξ + α, so that the systems with parameter values q and q + ε have the same uniformization rate. To maintain a constant uniformization rate, a fictitious event occurs at rate τ − q and τ − q − ε in the two systems, respectively. For instance, for the parameter q = µ, the optimality equations of the two systems with parameters µ and µ + ε are obtained from the optimality equation by including the fictitious-event term, where the operators T_adm and T_ret are defined in the previous section.
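For example, for q = µ the two rescaled optimality equations would read as follows (a sketch under the fictitious-event construction just described; the exact display is our reconstruction):

```latex
v_{\mu}(x_1,x_2) = \frac{1}{\tau}\Big[ \lambda T_{\mathrm{adm}} v_{\mu}(x_1,x_2)
  + \mu\, v_{\mu}(x_1,x_2-1) + \xi T_{\mathrm{ret}} v_{\mu}(x_1,x_2)
  + (\tau-\lambda-\mu-\xi-\alpha)\, v_{\mu}(x_1,x_2) \Big],

v_{\mu+\epsilon}(x_1,x_2) = \frac{1}{\tau}\Big[ \lambda T_{\mathrm{adm}} v_{\mu+\epsilon}(x_1,x_2)
  + (\mu+\epsilon)\, v_{\mu+\epsilon}(x_1,x_2-1) + \xi T_{\mathrm{ret}} v_{\mu+\epsilon}(x_1,x_2)
  + (\tau-\lambda-\mu-\epsilon-\xi-\alpha)\, v_{\mu+\epsilon}(x_1,x_2) \Big].
```

The last term in each bracket is the fictitious self-transition that keeps the total rate equal to τ in both systems.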
Using this method, we obtain the following lemma, whose proof is given in the Appendix.

Lemma 3.2. For the optimal value functions v_q(x_1, x_2) of two systems with different values of the parameter q, we have:
Property 1. ∆_1 v_{q+ε}(x_1, K) ≤ ∆_1 v_q(x_1, K) for q ∈ {ξ, r};
Property 2. ∆_1 v_{q+ε}(x_1, K) ≥ ∆_1 v_q(x_1, K) for q ∈ {λ, h, c};
Property 3. ∆_1 v_{q+ε}(x_1, K) ≤ ∆_1 v_q(x_1, K) + ε for q ∈ {r, c}.
Based on these properties of the optimal value function, we derive the structure of the optimal policy by analyzing the impact of the various system parameters on the thresholds. The following theorem summarizes the policy implications of the results.
Theorem 3.2. In the system control problems, the optimal admission threshold m is non-decreasing in r and ξ and non-increasing in h, c and λ. The optimal retrial threshold n is non-decreasing in c and ξ and non-increasing in r, h and λ.
Proof. From Theorem 3.1, the admission threshold is defined as m = min{x_1 : ∆_1 v(x_1, K) ≥ r}. By the comparison method above and Properties 1-3 in Lemma 3.2, we know how ∆_1 v_q(x_1, K) varies with q; hence the admission threshold m is non-increasing in h, c and λ and non-decreasing in r and ξ. The results for the retrial threshold n, namely that n is non-decreasing in c and ξ and non-increasing in r, h and λ, are obtained in a similar way.
Note that we analyze the monotonicity of the optimal thresholds with respect to all system parameters except the service rate µ. In Section 5, however, we give numerical results showing the impact of the service rate on the thresholds and the average cost.

4. A performance evaluation model. The structure of the optimal policy in Section 3 can be shown to continue to hold when the optimization criterion is the long-run average cost per unit time instead of the expected discounted cost. As shown by Cavazos-Cadena and Sennott [7], the existence of an optimal policy for the average cost criterion, with finite average cost independent of the starting state, can be proven via a limiting argument as the discount rate α → 0 in the discounted-cost version of the queueing system.
In this section, based on the structure of the optimal policy, we formulate a performance evaluation model for computing the optimal admission and retrial thresholds efficiently under the average cost criterion. Unlike the value iteration and dynamic programming algorithms, which depend on the problem parameter values and require truncation of the state space, the performance evaluation method, which exploits the characteristics of the optimal policy, is more efficient for the threshold policy model.
The approach that we take follows from the recognition that a system operating under a control policy specified by fixed thresholds m, n can be modeled as a Markov chain. In particular, for any choice of thresholds the system state {X(t) = (X_1(t), X_2(t)), t ≥ 0}, with state space {(x_1, x_2) | x_1 = 0, 1, ..., m, x_2 = 0, 1, ..., K}, evolves as a two-dimensional continuous-time Markov chain, where X_i(t), i = 1, 2, denote the number of customers in the orbit and in the buffer (including the customer in service), respectively. The infinitesimal generator Q of this system is block tridiagonal over the levels x_1 = 0, 1, 2, ..., n, n + 1, ..., m, of block order m + 1, and the matrices A_0, A_1, A_2, A_3, B_0, B_1, C_0 appearing as its blocks are square matrices of order K + 1 (we omit their entries). Thus our model has the standard structure of a QBD process with infinitesimal generator Q, and the matrix-geometric solution can be used. Here, we briefly outline the computation of the steady-state probabilities. Let X denote the steady-state probability vector of the generator Q. Since the state space is finite, the stationary distribution of the Markov chain {X(t), t ≥ 0} always exists. Using the efficient method developed by Gaver et al. [11], we obtain an iterative solution for the steady-state probabilities in the following theorem.

Theorem 4.1. The stationary probabilities satisfy the level-reduction recursion of Gaver et al. [11].

Proof. We refer to Gaver et al. [11] for the proof.
From Theorem 4.1, the stationary probabilities are calculated as follows. Step 1: compute the reduction matrices R_i, i = 0, 1, ..., m, recursively. Step 2: solve X_m from X_m R_m = 0 together with the normalization Σ_{i=0}^m X_i e = 1, where X_i = (x_{i0}, x_{i1}, ..., x_{iK}), i = 0, 1, 2, ..., m, and e is a column vector of 1's of appropriate order.

Remark 1. In this paper, we consider a retrial queue with a finite buffer and a single server. However, owing to the nature of the method used in this paper, it can also be applied to the multiserver retrial queue M/M/K. Specifically, there are K servers who provide service to customers. A customer who finds at least one free server starts service immediately, whereas a customer who finds all servers busy goes into the orbit and retries for service. Customers in the retrial group request service at a constant retrial rate, independent of the number of customers in the retrial group. The same control problem in the multiserver model can be analyzed by the method, based on the following two key points.
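The paper computes the steady-state vector with the level-reduction method of Gaver et al. [11]. For moderate state spaces, the same probabilities can also be obtained by solving X Q = 0 with the normalization X e = 1 directly; a minimal sketch (the function name and the toy two-state generator are our own, not from the paper):

```python
import numpy as np

def stationary(Q):
    """Stationary row vector pi of a finite CTMC generator Q: pi Q = 0, pi e = 1."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])   # append the normalization equation
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Toy two-state example: transition rates 0 -> 1 at 2.0 and 1 -> 0 at 3.0.
Q = np.array([[-2.0, 2.0],
              [3.0, -3.0]])
pi = stationary(Q)
print(pi)   # -> [0.6 0.4]
```

The same routine applies unchanged to the block-tridiagonal generator of the threshold-controlled chain, since the state space is finite.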
• In the multiserver retrial queue model, the optimality equation (1) takes the same form, with the single-server service rate replaced by the state-dependent multiserver service rate. Reviewing the induction argument, it is easy to verify that the value function in the multiserver model also satisfies the properties in Lemma 3.1, which ensure that the optimal policy has the threshold structure.
• As described above, under a fixed threshold policy the multiserver model also has the standard structure of a QBD process, and the infinitesimal generator of the process has the same form as in the present model, so that the steady-state probabilities can be obtained by the method of Theorem 4.1. Furthermore, the boundary blocks of the multiserver generator coincide with B_0, B_1 and C_0 of the present model, while the remaining blocks are modified accordingly.

In our model, the expected discounted cost (the sum of the lost-customer, holding and cancellation costs) over a finite time T under a policy π and an initial state x can be written in terms of M(t), the number of customers that have been rejected up to time t, and N(t), the number of repeated customers that have been cancelled at price c up to time t. For any fixed admission and retrial thresholds m and n, the process {X(t), t ≥ 0} is an ergodic Markov chain. As is known from Tijms [22], the long-run average cost per unit time for the threshold policy (m, n) in our model can therefore be written as g(m, n) = Σ_{(i,j)∈E} r((i,j), (m,n)) x_{ij}, where r((i,j), (m,n)) is the expected cost rate when the system is in state (i,j) under the fixed threshold policy (m,n) and x_{ij} is the steady-state probability of the state (i,j) ∈ E.
As can be seen, the above expression involves a finite sum. Therefore, the minimum average cost g* = min_{m,n∈N+} g(m, n) with N+ = {1, 2, ...} can be computed efficiently, and the optimal values of m and n can be obtained by an exhaustive search over a sufficiently large range of m and n. Compared with the dynamic programming method and Howard's iteration algorithm, the computational effort for carrying out this search is generally modest. For example, a search over a 100 by 100 grid takes only a few seconds on a standard personal computer. In an experiment with λ = 1, µ = 1, ξ = 0.5, h = 0.5, r = 35, c = 30, K = 20, the present method takes 8.863 seconds, while the modified policy iteration algorithm (see [18]) spends 15.257 seconds, almost twice as long as the present method.
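The exhaustive search can be sketched as follows. The generator entries and cost rates encode our reading of the threshold dynamics (arrivals rejected at cost rate λr once x_1 ≥ m at a full buffer, retrials cancelled at cost rate ξc once x_1 ≥ n), with illustrative parameters; it is a sketch, not the paper's implementation.

```python
import numpy as np

lam, mu, xi = 1.0, 1.2, 0.5
h, r, c, K = 0.5, 6.0, 4.0, 3

def avg_cost(m, n):
    """Average cost g(m, n) of the fixed-threshold policy (m, n), via pi Q = 0."""
    states = [(x1, x2) for x1 in range(m + 1) for x2 in range(K + 1)]
    idx = {s: i for i, s in enumerate(states)}
    S = len(states)
    Q = np.zeros((S, S))
    cost = np.zeros(S)                                # cost rate per state
    for (x1, x2) in states:
        i = idx[(x1, x2)]
        cost[i] = h * x1                              # holding cost of the orbit
        if x2 < K:
            Q[i, idx[(x1, x2 + 1)]] += lam            # arrival joins the buffer
        elif x1 < m:
            Q[i, idx[(x1 + 1, K)]] += lam             # arrival admitted to orbit
        else:
            cost[i] += lam * r                        # arrival rejected: reward lost
        if x2 > 0:
            Q[i, idx[(x1, x2 - 1)]] += mu             # service completion
        if x1 > 0:
            if x2 < K:
                Q[i, idx[(x1 - 1, x2 + 1)]] += xi     # successful retrial
            elif x1 >= n:
                Q[i, idx[(x1 - 1, K)]] += xi          # retrial cancelled
                cost[i] += xi * c
            # else: full buffer, x1 < n: customer returns to orbit (no transition)
    np.fill_diagonal(Q, -Q.sum(axis=1))
    A = np.vstack([Q.T, np.ones(S)])                  # pi Q = 0 plus normalization
    b = np.zeros(S + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return float(pi @ cost)

best = min((avg_cost(m, n), m, n)
           for m in range(1, 15) for n in range(1, m + 1))
print(best)   # (g*, m*, n*) over the searched grid
```

Each candidate pair (m, n) costs one small linear solve, which is why the grid search stays fast even for fairly large ranges.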
It is worthwhile to note that the method proposed in this paper can be applied to a class of models, such as retrial, polling and tandem queueing systems with impatient or feedback customers and multiple customer classes, and especially to admission control in queueing systems and optimal control of inventory-production systems (see [5], [6] and [10]). Specifically, the technique of this paper applies to models that satisfy the following conditions:
• First, the value function is convex or unimin (as defined in [12]), which provides an explicit characterization of the optimal control as a threshold policy.
• Second, under a threshold-structured policy the system can be analyzed as a QBD process, so that the steady-state probabilities can be obtained from the balance equations or by the matrix-geometric technique.

5. Numerical examples.
In this section, based on the results obtained, we provide numerical examples that examine the sensitivity of the admission and retrial thresholds to the system parameters. On the one hand, the numerical experiments and the following figures provide direct support for the monotonicity of the optimal thresholds. On the other hand, we also give numerical results showing the impact of the arrival rate λ, the service rate µ and the retrial rate ξ on the optimal average cost and the optimal thresholds. From the tables and figures we can make the following observations. Table 1-Table 3 present results showing the sensitivity of the optimal admission and retrial thresholds to the parameters λ, ξ and µ. We find that the optimal thresholds decrease as the rate λ increases, and increase as the parameters ξ and µ increase, for the cases K = 1, 5, 10, 15. When these parameters change within certain ranges, the optimal thresholds remain constant, which yields a staircase-like increasing or decreasing pattern. In addition, the average cost increases as the rate λ increases and decreases as the parameters ξ and µ increase. It is also noted in these tables that, for fixed parameters λ, µ, ξ, h, r, c, the optimal thresholds m, n increase and the optimal average cost g* decreases as the buffer capacity K increases; in some special cases the optimal thresholds remain the same as K varies. This behavior is consistent with intuition and is easy to explain. For example, an increase in the retrial rate ξ implies that the retrial time decreases, so the repeated customers in the orbit spend less time waiting for service and have a greater chance of retrying successfully. Hence, the optimal thresholds increase as ξ increases. Overall, the average cost decreases because the average holding cost of the customers in the orbit decreases.
In Figure 1-Figure 3, we describe the behavior of the optimal thresholds as functions of the holding cost h, the reward r and the cancellation cost c for the cases K = 1, 10. As can be seen from Figure 1, the holding cost h has a significant impact on the optimal thresholds, especially for larger K; specifically, the optimal thresholds decrease as h increases. From Figure 2, we observe that as the reward r increases, the optimal admission threshold m increases while the retrial threshold n decreases. Figure 3 shows that the cancellation cost c has the opposite impact on the optimal thresholds. All of these figures show a staircase-like monotone pattern. In particular, the optimal admission threshold equals the retrial threshold when r = c, which again illustrates that the retrial control is similar to the admission control, merely applied to a different class of customers.

6. Conclusion. In this paper we analyzed the optimal threshold policy in a retrial queue with a finite buffer. The optimal control policy for the customers is of threshold type in our model. Using a sufficiently large uniformization rate, we obtained the monotonicity of the optimal thresholds in the system parameters by comparing value functions. It is difficult to derive explicit formulas for the threshold levels; however, unlike the classical Howard iteration algorithm, we constructed a performance evaluation model based on the structure of the optimal policy for computing the thresholds, and analyzed the behavior of the optimal control policies under varying parameter values through numerical experiments. Furthermore, we characterized our technique and identified the class of models to which it can be applied.
The above results suggest several interesting extensions of the model, which we will study in the near future. One possible change is to consider systems where the interarrival, service and retrial time distributions are of phase type. Furthermore, we can apply embedded Markov chains and semi-Markov decision processes to queueing systems in which the service or retrial times follow general distributions. In addition, the method of this paper can also be extended to other queueing systems. Another way to generalize the model is to study models with different strategies that allow more practical applications, such as queueing systems with impatient customers or multiple priority classes.
Appendix. Proof of Lemma 3.1. Therefore, if the condition ∆_1 v(x_1, x_2) ≥ 0 holds for all (x_1, x_2) ∈ E, we have ∆_1 T_adm v(x_1, x_2) ≥ 0 and ∆_1 T_ret v(x_1, x_2) ≥ 0; that is, the operators T_adm and T_ret preserve the increasing property of the function v(x_1, x_2). Using induction and the iterative optimality equation v_{n+1} = T v_n, the increasing property carries over to the optimal value function. The proof of Lemma 3.1 (2) is similar to the proof of the increasing property of the value function v(x_1, x_2).
Proof of Lemma 3.2. In order to prove the properties, we mainly use the fixed point theorem and iterative induction. We first show that the operators T_adm and T_ret have the following properties. Using the corresponding results from Çil et al. [9] and Benjaafar et al. [5], for the operator T_adm we have ∆_1 T_adm v_{q+ε}(x_1, K) ≤ ∆_1 T_adm v_q(x_1, K) for q = ξ and ∆_1 T_adm v_{q+ε}(x_1, K) ≥ ∆_1 T_adm v_q(x_1, K) for q ∈ {λ, h, c}. For the operator T_ret, we have ∆_1 T_ret v_{q+ε}(x_1, K) ≤ ∆_1 T_ret v_q(x_1, K) for q ∈ {ξ, r} and ∆_1 T_ret v_{q+ε}(x_1, K) ≥ ∆_1 T_ret v_q(x_1, K) for q ∈ {λ, h}. These results rely on the comparing-systems method; Koole [15] gives a comprehensive survey of properties of value functions and operators in Markov reward and decision chains. It remains to prove the inequalities ∆_1 T_adm v_{q+ε}(x_1, K) ≤ ∆_1 T_adm v_q(x_1, K) for q = r and ∆_1 T_ret v_{q+ε}(x_1, K) ≥ ∆_1 T_ret v_q(x_1, K) for q = c. The case q = c is slightly more involved since the operator T_ret depends on c. From the definition of the operator T_ret and the property ∆_1 v_{c+ε}(x_1, K) ≤ ∆_1 v_c(x_1, K) + ε, we have n_c ≤ n_{c+ε}. Combining this with the induction assumption ∆_1 v_{c+ε}(x_1, K) ≥ ∆_1 v_c(x_1, K) and the definition of the threshold n, we obtain ∆_1 T_ret v_{c+ε}(x_1, K) ≥ ∆_1 T_ret v_c(x_1, K); the inequality ∆_1 T_adm v_{r+ε}(x_1, K) ≤ ∆_1 T_adm v_r(x_1, K) follows in a similar way.
To prove Property 1 and Property 2 for parameters that do not affect the uniformization rate, the preservation properties of the operators above together with induction immediately yield the corresponding inequalities for ∆_1 v_q(x_1, K). When q ∈ {λ, ξ}, however, the coefficients of the operators T_adm and T_ret and the uniformization rate depend on the parameter q, and the following additional property is needed for the proof of Lemma 3.2. (c) For the operator T_adm, from its definition and the convexity property ∆_1 v(x_1 + 1, K) ≥ ∆_1 v(x_1, K), a case analysis gives ∆_1 T_adm v(x_1, K) − ∆_1 v(x_1, K) ≥ 0. Similarly, from the definition of the operator T_ret and the same convexity property, a case analysis gives ∆_1 T_ret v(x_1, K) − ∆_1 v(x_1, K) ≤ 0. If q ∈ {λ, ξ}, the transition rates depend on q. Consider the argument for q = ξ by comparing the optimality equations of the two systems with retrial rates ξ + ε and ξ, respectively: v_ξ(x_1, x_2) = (1/τ)[λ T_adm v_ξ(x_1, x_2) + ξ T_ret v_ξ(x_1, x_2) + µ v_ξ(x_1, x_2 − 1) + (τ − λ − µ − ξ − α) v_ξ(x_1, x_2)], and analogously with ξ replaced by ξ + ε. Using properties (b) and (c) above, we obtain ∆_1 v_{ξ+ε}(x_1, K) ≤ ∆_1 v_ξ(x_1, K). Meanwhile, using properties (a) and (c), we obtain ∆_1 v_{λ+ε}(x_1, K) ≥ ∆_1 v_λ(x_1, K) in the same way.
To prove Property 3, first consider the case q = c. From the optimality equation, we show that the functions ∆_1 T_ret v_c(x_1, K) − c and ∆_1 T_adm v_c(x_1, K) − c are non-increasing in c. Since the function v(x_1, K) satisfies Property 2 and Property 3, we have ∆_1 v_{c+ε}(x_1, K) ≤ ∆_1 v_c(x_1, K) + ε for all x_1 ∈ N and n_c ≤ n_{c+ε}. It follows that ∆_1 T_ret v_c(x_1, K) − c is non-increasing in c, that is, ∆_1 v_{c+ε}(x_1, K) ≤ ∆_1 v_c(x_1, K) + ε. For q = r, we obtain ∆_1 v_{r+ε}(x_1, K) ≤ ∆_1 v_r(x_1, K) + ε in a similar way.