SEMI-MARKOVIAN CAPACITIES IN PRODUCTION NETWORK MODELS

Abstract. In this paper, we focus on production network models based on ordinary and partial differential equations that are coupled to semi-Markovian failure rates for the processor capacities. This modeling approach allows for intermediate capacity states in the range from total breakdown to full capacity, where operating and down times may be arbitrarily distributed. The mathematical challenge is to combine the theory of semi-Markov processes with the framework of conservation laws. We show the existence and uniqueness of such stochastic network solutions, present a suitable simulation method and explain the link to classical queueing theory. A variety of numerical examples emphasizes the characteristics of the proposed approach.


1. Introduction. The mathematical consideration of production processes as well as supply chains implies a variety of modeling approaches. Generally, they can be divided into two classes, microscopic and macroscopic models. The key idea of microscopic models is to track each item individually during production. Typical approaches are either based on discrete event simulations [1,2] or Newton-type dynamics [16]. In contrast, macroscopic models describe the average behavior of the production system in terms of part density or flow. They naturally arise from microscopic limits and are usually based on ordinary differential equations (ODEs) [15], hyperbolic partial differential equations (PDEs) [1,5,6,8,11] or a mixture of both [4,7,14]. We refer to [3] and the references therein for a comprehensive overview of macroscopic production models.
Up to now, most of the macroscopic systems are purely deterministic, and stochastic effects are less investigated. However, production networks are influenced by random perturbations, for instance random capacities. In [10,17], exponentially distributed operating and down times are assumed, meaning that a machine is either on and capable of working at full capacity, or off, leading to a total breakdown of production. Mathematically, the operating and down times can be seen as holding times in the operating and non-operating state of the machine, such that the capacity process can be interpreted as a continuous-time Markov chain (CTMC).
The theory on CTMCs is well established and also widely used in queueing theory [20,23,24], where the distributions of arrival times and service times are not restricted to the exponential distribution. A natural generalization of CTMCs are semi-Markov processes. These can be used to model and analyze reliability functions with semi-Markov failure rate processes, see e.g. [18,19]. The theory of semi-Markov processes with discrete state space is discussed intensively in [19,24,27] and for more general state spaces in [21].
In this work, we focus on a macroscopic model [14,17] and use the concept of semi-Markov processes to reformulate the capacity drop of machines by a capacity process which also allows for intermediate capacity states. Additionally, exponentially distributed operating and down times are replaced by arbitrarily distributed holding times for the corresponding capacity states. This leads to the extension of the original two-state capacity process to a multi-state process.
From a mathematical point of view, we have to fuse the theories of coupled PDE-ODE systems and semi-Markov processes in a way that the well-posedness of the resulting stochastic model is ensured. The numerical treatment and the interpretation of the simulation results compared to the common queueing theory approach are also discussed. A theoretical result as in [10] for the expected density is unfortunately not possible since a Lagrangian particle formulation is not available for network problems.
The paper is organized as follows: In section 2, the reformulation of the capacity drop is achieved by defining the capacity processes (µ^e(t), t ≥ 0) for all processors e in the network as a family of independent semi-Markov processes and including them in the original coupled PDE-ODE model [14]. The well-posedness of the model follows from the fact that the M-dimensional process ((µ^1(t), . . . , µ^M(t)), t ≥ 0) has only finitely many jumps in finite time intervals P-almost surely (P-a.s.). In section 3, we use the independence of the stochastic processes µ^1, . . . , µ^M to simulate the model. This is done by splitting the generation of sample paths for the capacities and the numerical treatment of the PDE-ODE system, cf. [17]. Therein, a stochastic simulation algorithm [12,13] and the idea of piecewise-deterministic processes [9] are presented to solve coupled PDE-ODE problems. However, the simulation of sample paths relies on the Markov property of the embedded Markov process and on stochastic numerical methods taken from [22]. To close the link between stochastic production network models and queueing theory, we analyze expected queue-loads and work in progress for a single queue-processor unit in section 4. Further computational results and comparisons are provided in section 5.
2. Stochastic production networks. The model under consideration is mainly based on [17] while the focus is on a different stochastic process for the machine breakdowns. More precisely, the capacity process originally defined by operating and non-operating states with exponentially distributed durations is replaced by a semi-Markov process. In doing so, there are two benefits: On the one hand, intermediate capacity states are allowed and, on the other hand, generalized distributions for the operating and down times can be applied.
We start with the framework of semi-Markov processes and explain in a second step how these processes can be used in the context of PDE-ODE coupled production networks.
2.1. Semi-Markov processes. The general idea of a semi-Markov process with discrete state space basically relies on a continuous-time Markov chain. In the following, we denote by (H_n, n ∈ N) the holding times, i.e. the duration of the process being in the state X_{n−1}, and by (J_n, n ∈ N_0) the jump times, i.e. the times at which the value of the process changes from X_{n−1} to X_n. Then, using the convention J_0 = 0,

(1) J_n = Σ_{i=1}^n H_i

holds for every n ∈ N_0. The capacity process (µ(t), t ≥ 0) is thus piecewise constant and right-continuous, satisfying µ(t) = c_{X_n} for t ∈ [J_n, J_{n+1}), as figure 1 illustrates. To embed these modeling assumptions into the theory of semi-Markov processes, we present basic definitions and relevant results taken from [25] and [27]. We always assume a finite set of capacities, and to ease the notation we use integer values as possible states of the process, i.e., state k corresponds to capacity c_k. Let m ∈ N be the number of states and Q : R → R^{m×m} be a matrix-valued function. Each component function Q_{lk} is assumed to be non-decreasing and right-continuous. Furthermore, the following properties should hold:

(2) Q_{lk}(t) = 0 for every t ≤ 0 and all l, k ∈ {1, . . . , m},
(3) 0 ≤ Q_{lk}(t) ≤ 1 for every t ∈ R,
(4) lim_{t→∞} Σ_{k=1}^m Q_{lk}(t) = 1 for every l ∈ {1, . . . , m}.

Let us now define the so-called (X, H)-process, see [27].

SIMONE GÖTTLICH AND STEPHAN KNAPP
Definition 2.1 ((X, H)-process). A stochastic process ((X_n, H_n), n ∈ N_0) on a probability space (Ω, A, P) with values in {1, . . . , m} × R_{≥0} and H_0 = 0 is called (X, H)-process if

P(X_0 = k) = π_k

for a given initial distribution vector π and

P(X_{n+1} = k, H_{n+1} ≤ t | X_0, . . . , X_n, H_1, . . . , H_n) = Q_{X_n k}(t)

P-a.s. for every t ∈ R, n ∈ N_0 and every k = 1, . . . , m.
From equations (2)-(4) we can deduce that the functions L_l(t) = Σ_{k=1}^m Q_{lk}(t) are distribution functions and consequently describe the distribution of the holding times given the state of the process. The matrix Q encodes the joint transition probabilities of the holding time process (H_n, n ∈ N) and the state process (X_n, n ∈ N_0). Consequently, we can choose a matrix-valued function with entries given by weighted cumulative distribution functions (cdf's) such that the assumptions (2)-(4) are satisfied. Note that there is no restriction to a specific choice of cdf's, and the exponential distribution forms a special case.
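As an illustration of such a kernel, consider a hypothetical two-state on/off process with exponentially distributed holding times; all names, rates and the two-state choice below are our own and serve only as a sketch:

```python
import math

# Hypothetical two-state example: state 0 = full capacity (on), state 1 = breakdown (off).
# P is a stochastic matrix of transition probabilities, F[l][k] the cdf of the
# holding time in state l given a jump to state k (here: exponential cdfs).
P = [[0.0, 1.0],
     [1.0, 0.0]]          # on <-> off, no self-transitions

def exp_cdf(rate):
    """Cdf of an exponential distribution with the given rate."""
    return lambda t: 1.0 - math.exp(-rate * t) if t > 0 else 0.0

F = [[exp_cdf(1.0), exp_cdf(1.0)],     # mean operating time 1
     [exp_cdf(10.0), exp_cdf(10.0)]]   # mean repair time 0.1

def Q(l, k, t):
    """Semi-Markov kernel entry Q_{lk}(t) = P_{lk} * F_{lk}(t)."""
    return P[l][k] * F[l][k](t)

# Row sums tend to 1 as t grows, so L_l(t) = sum_k Q_{lk}(t) is a cdf.
row_sum = sum(Q(0, k, 100.0) for k in range(2))
```

The weighted-cdf structure Q_{lk} = P_{lk} F_{lk} used here anticipates the decomposition employed later for the simulation.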
A relevant result for the numerical simulation of such processes is given by the following theorem, see [27].

Theorem 2.2. The stochastic process (X_n, n ∈ N_0) is a Markov chain and the (X, J)-process ((X_n, J_n), n ∈ N_0) with J_n defined by (1) is a Markov process.
Using the considerations from figure 1, the capacity process is induced by the (X, H)-process. This is the so-called Z-process or semi-Markov process, which we define as follows.

Definition 2.3 (Semi-Markov process). Let an (X, H)-process on (Ω, A, P) be given. We define the Z-process as a time-continuous stochastic process (Z(t), t ≥ 0) on (Ω, A, P) with values in {1, . . . , m} by

Z(t) = X_n for t ∈ [J_n, J_{n+1})

for every t ≥ 0. The process is called semi-Markov process.
The semi-Markov process (Z(t), t ≥ 0) is in general not a Markov process, but a characterization via the embedded Markov process in the sense of the (X, J)-process is straightforward.
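Numerically, evaluating a Z-path amounts to a step-function lookup in the jump times; a minimal sketch (the path values below are illustrative):

```python
import bisect

def z_process(states, jump_times, t):
    """Evaluate the semi-Markov (Z-)process: Z(t) = X_n for J_n <= t < J_{n+1}.

    states:      [X_0, X_1, ...] visited states of the embedded chain
    jump_times:  [J_0 = 0, J_1, ...] non-decreasing jump times
    """
    # index of the last jump time <= t (right-continuity of the path)
    n = bisect.bisect_right(jump_times, t) - 1
    return states[n]

# illustrative path: X = (1, 2, 1) with jumps at J = (0, 0.7, 1.9)
X, J = [1, 2, 1], [0.0, 0.7, 1.9]
z_at_jump = z_process(X, J, 0.7)   # right-continuous: already the new state
```

The `bisect_right` lookup reproduces exactly the right-continuity required of the capacity paths.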
2.2. Network definition and modeling equations. We briefly recall the PDE-ODE production network model from [14] and consider for simplicity the special case of a serial network without branches. The network consists of M queue-processor units e ∈ E = {1, . . . , M }, where each processor possesses a single queue (or buffer) in front. There is a given external inflow into the network at the first queue and the total outflow is considered as the output of processor M .
The queue of each processor is modeled by an ordinary differential equation (ODE) that measures the difference between incoming and outgoing flux. In detail, the queue-load at time t is denoted by q^e(t) and computed as

(9) d/dt q^e(t) = g^e_in(t) − g^e_out(t), q^e(0) = q^e_0,

for an initial value q^e_0 ∈ R_{≥0}. If the queue is non-empty, the outflux g^e_out is determined by the capacity µ^e(t) at time t of the associated processor e. Otherwise, if the queue is empty, the outflow is the minimum of the incoming flow g^e_in(t) and the capacity µ^e(t):

(10) g^e_out(t) = µ^e(t) if q^e(t) > 0, and g^e_out(t) = min{µ^e(t), g^e_in(t)} if q^e(t) = 0.
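A minimal sketch of this queue update as a time-stepping rule; the clipping at zero is our own numerical safeguard, not part of the model:

```python
def queue_step(q, g_in, mu, dt):
    """One explicit Euler step for the queue ODE q' = g_in - g_out with the
    capacity-constrained outflux rule: full capacity while the queue is
    non-empty, min(capacity, inflow) while it is empty."""
    g_out = mu if q > 0 else min(mu, g_in)
    q_new = max(q + dt * (g_in - g_out), 0.0)  # clip: queue-loads are non-negative
    return q_new, g_out

# empty queue, inflow below capacity: queue stays empty, outflux tracks inflow
q_empty, g_empty = queue_step(0.0, 0.8, 2.0, 0.01)
# filled queue: the processor drains it at full capacity
q_full, g_full = queue_step(1.0, 0.5, 2.0, 0.1)
```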
Every processor e ∈ E is characterized by its individual capacity µ^e(t) and its processing velocity v^e. The part density ρ^e : [a^e, b^e] × R_{≥0} → R_{≥0} is governed by the conservation law

(12) ∂_t ρ^e(x, t) + ∂_x f^e(t, ρ^e(x, t)) = 0,

where the flux function is

(13) f^e(t, ρ) = min{µ^e(t), v^e ρ},

and initial and boundary values are

(14) ρ^e(x, 0) = ρ^e_0(x) for x ∈ [a^e, b^e], (15) f^e(t, ρ^e(a^e, t)) = g^e_out(t) for t ≥ 0.

The flux into a queue is consequently

(16) g^e_in(t) = f^{e−1}(t, ρ^{e−1}(b^{e−1}, t)) for e > 1, g^1_in(t) = G_in(t),

with G_in being the time-dependent inflow into the serial production network. We state the following

Definition 2.4 (Stochastic production network model). Let {(µ^e(t), t ≥ 0), e ∈ E} be a family of independent semi-Markov processes on some probability space (Ω, A, P) having finite state spaces C^e ⊂ R_{≥0}. The stochastic production network model is then given by the equations (9)-(16) with capacities µ^e(t), e ∈ E.
Analogously to the deterministic case, i.e., all processors working with their maximal capacities µ^e, we have to prove that the stochastic network model is well-defined. Therefore, the following definition is needed.

Definition 2.5 (Solution). We call X = (µ^e, ρ^e, q^e, e ∈ E) a solution of the stochastic production network model given by definition 2.4 if there exists a set N ∈ A with P(N) = 0 such that for every ω ∈ N^c the equations (9)-(16) hold in the sense of a deterministic network solution, see [3].
We obtain the following existence and uniqueness result.
Theorem 2.6 (Existence and uniqueness of a solution). Let a stochastic production network model be given. Furthermore, we assume that

TV_{[a^e,b^e]}(ρ^e_0) < ∞ for every e ∈ E and TV_{[t_0,T]}(G_in) < ∞.

Then there exists a unique solution in the sense of definition 2.5.

Proof. The proof comprises two parts. First, we show that the M-dimensional stochastic process (µ^1, . . . , µ^M) has only finitely many jumps in finite time intervals P-a.s. Let N(t) be the number of jumps of the multidimensional process up to time t and let N^e(t) be the number of jumps of the e-th semi-Markov process for some t ∈ [t_0, T] and for all e ∈ E. The inequality

N(t) ≤ Σ_{e=1}^M N^e(t)

holds for every t. Since a semi-Markov process with finite state space has only finitely many jumps in finite time P-a.s., see e.g. [27], it follows that P(N^e(T) < ∞) = 1 for every e ∈ E. Hence, we get P(N(T) < ∞) = 1, which implies that the number of jumps of the multidimensional process is also P-a.s. finite in every finite time interval.

Second, we prove the existence of a solution by setting N ∈ A as the set of all events such that (µ^1, . . . , µ^M) has infinitely many jumps in [t_0, T]. From the first part of the proof we know that P(N) = 0. For every ω ∈ N^c we obtain an integer K(ω) ∈ N and finitely many jump times {J_1(ω), . . . , J_{K(ω)}(ω)} such that we can construct the solution piecewise in-between. The main assumptions for the existence and uniqueness of solutions in the deterministic case [3], namely TV_{[a^e,b^e]}(ρ^e_0) < ∞ and TV_{[t_0,T]}(G_in) < ∞, are also fulfilled here. Thus, we get the existence of a unique solution for the realization ω ∈ N^c on [t_0, J_1(ω)] with TV_{[a^e,b^e]}(ρ^e(·, J_1(ω), ω)) < ∞. Then, the assumptions on the total variation are again satisfied and X(J_1(ω), ω) can be used as a new initial value for a network with capacities µ^e(J_1(ω), ω). A unique solution for this realization is obtained by applying this procedure iteratively up to [J_{K(ω)}(ω), T]. Since the process has P-a.s. finitely many jumps, we conclude the existence of a unique solution in the sense of definition 2.5.

Remark 1. The bound from [3] on the total variation of a deterministic network solution applies in our context between two consecutive jumps of the capacity processes. Regarding the proof of theorem 2.6, the total variation of the network solution can increase through the jumps in the capacity processes, and a dependence of the total variation bound on the number of jumps is therefore indispensable. For a sample path ω ∈ N^c, we postulate a bound on the total variation that grows with the number of jumps and the maximal capacities, where we set C^e_max = max C^e and denote by N^e(t, ω) the number of jumps of (µ^e(s, ω), s ∈ [0, T]) up to time t.
3. Numerical treatment. As proposed in [17], we use the fact that the PDE-ODE system can be solved deterministically between the random switching times in the sense of a piecewise-deterministic process [9]. Hence, two issues have to be faced to derive a suitable numerical approximation of the new stochastic network model: the generation of semi-Markov capacities and the approximation of the deterministic PDE-ODE system between the random times. The discretization of the conservation law (12) is done with a standard left-sided upwind scheme, while the ordinary differential equation for the queue is discretized by an explicit Euler method. However, the reformulation of the simulation algorithm [17] for the capacity process to a semi-Markov process is not straightforward and will be discussed in detail.
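Between two jumps of the capacity process, the upwind discretization of the density can be sketched as follows; we assume the capacity-constrained flux min{µ, vρ} on a single processor, and the grid parameters are our own illustrative choice:

```python
def upwind_step(rho, mu, v, dx, dt, inflow):
    """One left-sided upwind step for rho_t + (min(mu, v*rho))_x = 0 on a
    single processor; `inflow` is the flux entering at the left boundary.
    The capacity mu is constant between two jumps of the semi-Markov process."""
    flux = [min(mu, v * r) for r in rho]       # capacity-constrained flux
    new = list(rho)
    new[0] -= dt / dx * (flux[0] - inflow)     # left boundary cell
    for j in range(1, len(rho)):
        new[j] -= dt / dx * (flux[j] - flux[j - 1])
    return new

# CFL condition dt <= dx / v satisfied with equality, as in the experiments later
v, dx = 1.5, 0.05
dt = dx / v
rho = [0.0] * 10
for _ in range(200):
    rho = upwind_step(rho, mu=2.0, v=v, dx=dx, dt=dt, inflow=0.8)
# for constant inflow below capacity, the density settles at inflow / v
```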
3.1. Generation of a semi-Markov process. The general procedure to simulate a semi-Markov process is based on the Markov property of the underlying (X, J)-process. This means that the information on the states X_n and J_n is sufficient to compute the states X_{n+1} and J_{n+1}. Once we are able to determine the transition of the (X, J)-process, we can find a corresponding path of the semi-Markov process as the step function given by the (X, H)-path. By defining

P_{lk} = lim_{t→∞} Q_{lk}(t),

we get a stochastic matrix P which generates the time-discrete Markov chain (X_n, n ∈ N_0). We observe the following relation as in [27] by using the definition of conditional probabilities:

Q_{lk}(t) = P_{lk} F_{lk}(t) with F_{lk}(t) = P(H_{n+1} ≤ t | X_n = l, X_{n+1} = k).

Using this notation, the distribution function of the holding time in state l reads

L_l(t) = Σ_{k=1}^m P_{lk} F_{lk}(t).

The property Σ_{k=1}^m P_{lk} = 1 ensures that the composition method can be used to simulate the holding times, see e.g. [22].

Altogether, we end up with Algorithm 1 for the transition of an (X, J)-process given by P, F and the values X_n, J_n. Therein, the routine RandGenP(X_n) is a random number generator for the probability vector (P_{X_n 1}, . . . , P_{X_n m}) and RandGenF(l, k) generates a random number with cumulative distribution function F_{lk}.
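The transition step is easiest to read as code. A sketch of one transition, where RandGenP and RandGenF are emulated with inverse-transform sampling for exponential holding times; all names, rates and the two-state setting are our own choices:

```python
import math
import random

def rand_gen_p(P, l, u=None):
    """RandGenP: sample the next state from the probability vector (P_l1, ..., P_lm)."""
    u = random.random() if u is None else u
    acc = 0.0
    for k, p in enumerate(P[l]):
        acc += p
        if u <= acc:
            return k
    return len(P[l]) - 1   # guard against floating-point round-off

def transition_step(P, rand_gen_f, x_n, j_n):
    """One transition of the (X, J)-process: draw X_{n+1} from row X_n of P,
    then a holding time H_{n+1} with cdf F_{X_n, X_{n+1}} (composition method),
    and set J_{n+1} = J_n + H_{n+1}."""
    x_next = rand_gen_p(P, x_n)
    h = rand_gen_f(x_n, x_next)
    return x_next, j_n + h

# two-state on/off example with exponential holding times (our own choice);
# RandGenF is emulated by inverse-transform sampling
P = [[0.0, 1.0],
     [1.0, 0.0]]
rates = [[1.0, 1.0], [10.0, 10.0]]
rand_gen_f = lambda l, k: -math.log(1.0 - random.random()) / rates[l][k]

random.seed(0)
x1, j1 = transition_step(P, rand_gen_f, 0, 0.0)
```

Iterating `transition_step` yields an (X, J)-path, from which the semi-Markov path follows as the associated step function.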
Algorithm 1: Transition step for an (X, J)-process. Data: P, F and the current values X_n, J_n; Result: X_{n+1}, J_{n+1}.

3.2. Performance measures. To evaluate the performance of the stochastic network and its numerical behavior, performance measures need to be defined. For our purposes, these are the mean queue-load, the mean outflow, the mean production time and, where relevant, higher moments of the random variables. According to [17], the outflow of the network is given by

G^net_out(t) = min{v^M ρ^M(b^M, t), µ^M(t)},

and additionally we define G^net_in(t) = G_in(t) as the network inflow, q^net(t) = Σ_{e∈E} q^e(t) as the sum of all queue-loads, and τ_prod(T) as the production time of all parts which have entered the production up to time T. Since the measures above are random variables with unknown distributions, estimators (e.g. for moments) are required. We denote as usual by

X̄ = (1/n) Σ_{i=1}^n X_i

the mean value estimator and by

σ²(X) = 1/(n−1) Σ_{i=1}^n (X_i − X̄)²

the variance estimator for a family of independent, identically distributed (iid) random variables X_1, . . . , X_n. The strong convergence of these estimators follows directly from the law of large numbers, since G^net_out(t) and q^net(t) are finite for finite time horizons and τ_prod(T) is P-a.s. finite for reasonable parameters or distributions, respectively. For given, independently generated realizations x_1, . . . , x_n of the random variable X, the estimated mean value is x̄ = (1/n) Σ_{i=1}^n x_i and the estimated variance of the sample is σ²(x) = 1/(n−1) Σ_{i=1}^n (x_i − x̄)². To ease the notation, we denote them also by X̄ and σ²(X), respectively.
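The two estimators can be written down directly; a minimal sketch:

```python
def mean_var(xs):
    """Sample mean and unbiased sample variance (divisor n - 1) of iid samples."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / (n - 1)
    return m, var

m, v = mean_var([1.0, 2.0, 3.0, 4.0])
```

These are exactly the quantities reported as sample means and variances in the numerical results below.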
4. Relation to queueing theory. In this section, we analyze under which conditions the macroscopic model described in section 2 is related to a classical queueing system and how the latter can be used to compute the expected queue-load and the expected work in progress (WIP) in the steady state of the macroscopic model. We restrict our analysis to a production network consisting of a single queue-processor unit and the comparison with an (M, G, 1) queue, i.e., arrivals are Markovian (M), service times have a general distribution (G) and there is a single server (1). The processor is assumed to have a two-state capacity process (µ(t), t ≥ 0) with values in {0, c} for some fixed capacity c > 0, i.e., it is either on or off. Let λ > 0 be the constant arrival rate for the Markovian arrivals in the queueing system. This means the inter-arrival times (H^1_n, n ∈ N) are exponentially distributed with mean λ^{−1}, such that the arrival process can be seen as a Poisson process with parameter λ. To transfer the arrival process into the macroscopic framework, we set the inflow such that the cumulative arrivals match the Poisson process: if t ∈ [J^1_k, J^1_{k+1}), the amount of arrivals until time t equals k, which is exactly the value of the Poisson process at the times t = J^1_k. The major difference between queueing theory and the macroscopic model arises from the different interpretation of the service time S, which corresponds to the production time in this context. In an (M, G, 1) queue the production time can be arbitrarily distributed assuming S ≥ 0 and is defined as the time needed to complete exactly one good. If we defined the production time in the macroscopic model as the time needed to produce one amount of goods, a processor with large length l would have a large production time. In this case, the processor is able to absorb parts from the queue although there is no outflow yet. This leads to a totally different size of the queue in the macroscopic model compared to the queueing system.
For this reason, we define the production time S in the macroscopic model as the time needed to reduce the queue by one amount of goods, assuming no inflow during this time. Let a filled queue with q(0) = 1 without any inflow be given. Then the production time S is the first time at which the queue is empty, i.e., with Y(t) = ∫_0^t µ(s) ds we can reformulate the problem to

(17) Y(S) = 1,

which has no solution in the usual sense since the function Y is increasing but not strictly increasing. Hence, to define a solution to equation (17), we define the pseudo-inverse, as for cumulative distribution functions,

Y^{−1}(x) = inf{t ≥ 0 : Y(t) ≥ x},

and obtain the production time S = Y^{−1}(1). If we subtract the linear part x ↦ x/c from the inverse, we obtain a piecewise-constant jump process, where the holding times are given by the on-times (τ^on_k, k ∈ N) and the off-times (τ^off_k, k ∈ N) characterize the jump sizes. Therefore, the inverse can be written as

Y^{−1}(x) = x/c + Σ_{l=1}^{N_{x/c}} τ^off_l with N_y = max{k ∈ N_0 : Σ_{l=1}^k τ^on_l ≤ y}.

Now we can use well-known results from queueing theory, see e.g. [20], to get the average number of parts in the queue L_q in steady state. From the literature, we know that

(18) L_q = λ² E[S²] / (2 (1 − λ E[S]))

under the condition

(19) λ E[S] < 1.

The first two moments of the random variable S follow from the representation above. If we now assume that (τ^on_n, n ∈ N) are iid exponentially distributed random variables, the stochastic process (N_x, x ≥ 0) is a Poisson process and N_x is Poisson distributed with parameter x/E[τ^on_1]. This leads to the more condensed formulas

E[S] = (1/c) (1 + E[τ^off_1]/E[τ^on_1]), E[S²] = E[S]² + E[(τ^off_1)²] / (c E[τ^on_1]),

and together with equations (18)-(19) we are able to calculate the expected number of parts in the queue in steady state.
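Under the exponential on/off assumptions, these moments can be evaluated numerically; the expression for L_q is the Pollaczek-Khinchine mean queue length, and the off-times are assumed exponential in this sketch (parameter values are illustrative):

```python
def expected_queue_load(lam, c, e_on, e_off):
    """Steady-state mean queue length L_q of the (M, G, 1) interpretation.

    The production time is S = 1/c + sum of the off-times interrupting the
    service; for exponential on-times the number of interruptions is Poisson
    distributed with parameter 1/(c * e_on). Off-times are assumed exponential
    with mean e_off (our own simplifying choice for this sketch)."""
    e_s = (1.0 / c) * (1.0 + e_off / e_on)
    # compound-Poisson variance: (1/(c*e_on)) * E[(tau_off)^2], with
    # E[(tau_off)^2] = 2 * e_off^2 for the exponential distribution
    var_s = (1.0 / (c * e_on)) * 2.0 * e_off ** 2
    e_s2 = var_s + e_s ** 2
    rho = lam * e_s
    if rho >= 1.0:
        raise ValueError("no steady state: lambda * E[S] >= 1")
    # Pollaczek-Khinchine formula for the mean number waiting in the queue
    return lam ** 2 * e_s2 / (2.0 * (1.0 - rho))

lq = expected_queue_load(lam=1.0, c=2.0, e_on=0.9, e_off=0.1)
```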
Next, we are interested in the computation of the expected number of parts in the system, which is given by the sum of the expected number of parts in the queue L_q and the expected number of parts currently in progress λ E[S], i.e.,

(20) L = L_q + λ E[S].
Equation (20) is certainly valid for a queueing system but might fail for the determination of the expected work in progress (WIP) in the macroscopic model. We define the WIP as the number of parts in the processor, i.e., the integral of the density over the processor. Similar to the calculations above, we need a reformulation of the WIP in the macroscopic model in terms of the random variable S. Therefore, we derive conditions on the length and velocity of a processor such that equation (20) is satisfied and furthermore deduce an equation which is valid for arbitrary parameters. We use again the interpretation of the production time, which is defined in the queueing system as the time needed to produce one part. In other words, we need to calibrate a processor such that the production time S of the macroscopic model equals the time to produce one amount of goods. Let the processor be characterized by its length l̃, its processing velocity ṽ and the capacity process µ(t) from above, and let r(t) = µ(t)/c ∈ {0, 1} denote the corresponding on-off process. Due to the definition of the queue in the macroscopic model, an initial datum ṽ ρ_0 ≤ c implies the reduction of the conservation law to an advection equation with linear flux f(t, ρ) = ṽ r(t) ρ. Let x be the characteristic curve of the advection equation starting in the origin; then we get

x(t) = ∫_0^t ṽ r(s) ds

for every t ≥ 0. The time S̃ to empty the processor fulfills

(21) x(S̃) = l̃.

The time S̃ should match the production time S, such that a comparison of equations (17) and (21) implies the relation

l̃/ṽ = 1/c.

If we fix the processing velocity ṽ to v, the length is l̃ = v/c. Then, equation (20) holds true and the expected WIP for this processor is given by λ E[S]. In steady state, the expected density is constant and not affected by the length of the processor. Thus, for ρ̄ being the expected steady state density of a processor described by the capacity process µ(t) and the processing velocity v, the calibrated processor carries the WIP l̃ ρ̄ = (v/c) ρ̄ = λ E[S], and for a processor with general length l we have the WIP l ρ̄ = (l c / v) λ E[S].
Summarizing, applying known results from queueing theory, the expected amount of goods in the macroscopic queue-processor unit in steady state is

(22) L = L_q + (l c / v) λ E[S].

Remark 2.
1. For a single processor in the macroscopic case with stochastic inflow and on-off capacities, we are able to calculate the expected queue size and WIP in steady state with equations (18) and (22). The assumption of exponentially distributed on-times is not necessary but allows us to state an explicit expression of the moments for the stochastic process N_x.
2. The condition λ E[S] < 1 in (19) can be used to analyze whether the macroscopic production model achieves a steady state or grows infinitely in the sense of expectations.
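For the example below (l = 0.5, v = 1.5, c = 2, mean on-time 0.9, mean off-time 0.1), the reference values L_q and L can be computed directly; λ = 1 is an illustrative choice, and the sketch assumes exponential on- and off-times and the general-length WIP correction lc/v discussed above:

```python
def expected_totals(lam, c, e_on, e_off, l, v):
    """Steady-state L_q and L = L_q + WIP for a single queue-processor unit,
    assuming exponential on/off times (our simplifying choice)."""
    e_s = (1.0 / c) * (1.0 + e_off / e_on)                 # E[S]
    e_s2 = e_s ** 2 + 2.0 * e_off ** 2 / (c * e_on)        # E[S^2], compound Poisson
    lq = lam ** 2 * e_s2 / (2.0 * (1.0 - lam * e_s))       # Pollaczek-Khinchine
    wip = (l * c / v) * lam * e_s                          # processor content
    return lq, lq + wip

lq, total = expected_totals(lam=1.0, c=2.0, e_on=0.9, e_off=0.1, l=0.5, v=1.5)
```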
Example. We illustrate the results obtained from (18) and (22) numerically for one example. To do so, we choose a queue-processor unit with length l = 0.5, processing velocity v = 1.5 and maximal capacity c = 2. The capacity process is described by exponentially distributed on- and off-times such that E[τ^on_1] = 0.9 and E[τ^off_1] = 0.1. This yields a mean availability of 0.9 and we can randomly choose the initial state for the capacity process according to a Bernoulli distribution with p = 0.9 to accelerate the convergence to the expected steady states. We run our simulations using 10^4 Monte-Carlo iterations, time step size ∆t = 1/400 and a spatial discretization such that the CFL condition ∆t ≤ ∆x/v is satisfied with equality. Figures 3-6 show the sample mean queue-load q̄ and the total amount of goods q̄ + WIP of the macroscopic system for different arrival rates λ. The dashed horizontal lines represent the values L_q and L calculated by equations (18) and (22). We observe a good approximation of the steady state values in all cases and a relatively fast convergence towards them, independent of the different arrival rates.

5. Numerical results. 5.1. Comparison of two capacity processes. We consider a single processor and distinguish two cases. In the first case, the processor is either fully operational or broken down, resulting in two possible capacities. In the second case, we assume that the capacity can be randomly reduced, but not to zero, resulting in three possible capacities. Generally, these two capacity processes (µ^{Case 1}(t), t ≥ 0) and (µ^{Case 2}(t), t ≥ 0) are diverse and will produce different results. However, to compare both, we choose the transition rates such that the expected stationary capacities are equal. To do so, we use well-known results from the theory of CTMCs, see e.g. [26], and exponentially distributed holding times. The transition rates of the two CTMCs (µ^{Case 1}(t), t ≥ 0), (µ^{Case 2}(t), t ≥ 0) are illustrated in figure 7, with a resulting expected stationary capacity of 0.9 in both cases.
In detail, the stationary probabilities for the first case are P(µ^{Case 1}(∞) = 1) = 9/10 and P(µ^{Case 1}(∞) = 0) = 1/10, and in the second case P(µ^{Case 2}(∞) = 1) = 9/11, P(µ^{Case 2}(∞) = 0.5) = 9/55 and P(µ^{Case 2}(∞) = 0) = 1/55.
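A quick check that both stationary distributions indeed yield the same expected capacity:

```python
# Stationary distributions of the two capacity processes (values from the text):
# capacity -> stationary probability
case1 = {1.0: 9 / 10, 0.0: 1 / 10}
case2 = {1.0: 9 / 11, 0.5: 9 / 55, 0.0: 1 / 55}

def expected_capacity(dist):
    """E[mu(infinity)] = sum_c c * P(mu(infinity) = c)."""
    return sum(c * p for c, p in dist.items())

e1, e2 = expected_capacity(case1), expected_capacity(case2)
```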
By a suitable choice of the parameters such that the expected stationary capacities are equal, we expect the same stationary density and queue-load for both cases. In fact, for a constant inflow of G_in(t) = 0.8, which is below the expected stationary capacity, we observe this behavior in figures 8, 9 and 10 as a result of a simulation with 10^4 Monte-Carlo iterations.

Figure 7. Graph representation of the CTMCs

In figure 8 we show the difference of the first and second case in terms of sample mean and variance. A fast convergence behavior for increasing time t can be observed. This behavior can also be seen for the sample mean and variance of the queue-loads in figure 9. We suppose that the steady state of the first two moments of the density and the queue-load only depends on the mean capacity in these scenarios.
Due to the fact that G^net_out(t) = min{v ρ(b, t), µ(t)} is directly affected by the capacity processes with their different variances, different sample variances occur for the outflow, as figure 10 illustrates. If we focus on the local behavior in time, we conclude from figure 8 a significant difference between the two cases at the boundaries of the two densities. This motivates the analysis of a non-stationary situation, i.e., a time-dependent inflow. We choose G_in(t) = 0.9 |cos(t)| and observe an impact on the sample means and variances, as figures 11, 12 and 13 indicate.

Figure 11. Difference of the sample mean and variance of the densities

From the periodic differences in figure 11, we conclude different reaction times of the model for cases one and two that lead to different production times. This motivates the analysis of the production times in both cases, where we set the inflow to G_in(t) = 1_{[0,1]}(t). We might expect a lower production time in the second case arising from the stationary producing probability of 54/55 ≈ 0.98, which is higher than the value 0.9 of the first case. Table 1 shows the sample mean and variance of the production times. As we can see, they are very close for both cases, which is an unexpected result.
Additionally, by comparing the histograms of the production times in figure 14 as an approximation of their distribution, we detect no severe differences between the two cases.

Table 1. Sample mean and variance of the production time
                    Case 1    Case 2
  τ̄_prod(∞)         2.2357    2.2197
  σ²(τ_prod(∞))     0.0529    0.0466

One reason for these similarly distributed production times may be effects which cancel out: the stationary probability of a capacity lower than one is smaller in case one (1/10) than in case two (10/55), but the probability to produce nothing is higher in the first case (1/10) than in the second one (1/55).

5.2. Impact of holding times. Before we analyze the influence of the holding time distribution on the performance measures, we comment on the limitations of the exponential ansatz. The parameter of the exponential distribution is determined by the mean, i.e., it holds E[X] = λ^{−1} for X ∼ Exp(λ), λ > 0, and for the variance Var(X) = λ^{−2} = E[X]². If we think about the modeling of an event occurring every 30 days in mean and having a standard deviation of two days, the exponential ansatz would always imply a standard deviation of 30 days and hence fails. There is a need for alternative distributions with positive support, such as the gamma or Weibull distribution.

Figure 14. Histogram of production times

Here, we focus on the gamma distribution and a comparison with the exponential distribution. The latter is in fact a special case of the gamma distribution. This allows us to vary the parameters near the exponential distribution to investigate the influence on the overall performance of the production network. We follow the convention of MATLAB for the gamma distribution, i.e., a gamma distributed random variable X with shape parameter d_1 and scale parameter d_2 satisfies E[X] = d_1 d_2 and Var(X) = d_1 d_2². Given the expected value and the variance, the parameters are determined by

d_1 = E[X]² / Var(X) and d_2 = Var(X) / E[X].

Our plan for the analysis of the influence caused by the holding time distribution is to keep the expected value constant and vary the variance of the gamma distribution. An illustration of probability density functions (pdf's) of the gamma distribution is given in figure 15; an increase of the variance implies an increasing spread of the pdf around the fixed mean.

Table 2. Parameters of the network model with five processors

This information is sufficient for the exponential case, whereas further assumptions are needed for the gamma distributed one.
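The moment matching for the gamma distribution is a two-liner; a sketch using the 30-day example from the text:

```python
def gamma_params(mean, var):
    """Shape d1 and scale d2 of the gamma distribution (MATLAB convention,
    gampdf(x, d1, d2)) matching a given mean and variance:
    mean = d1 * d2 and var = d1 * d2^2."""
    d1 = mean ** 2 / var
    d2 = var / mean
    return d1, d2

# event every 30 days in mean with a standard deviation of 2 days
d1, d2 = gamma_params(30.0, 2.0 ** 2)
# the exponential case is recovered for var = mean^2, i.e. shape d1 = 1
```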
Therefore, we set for every α > 0 the variance of the time between failures (VTBF) and the variance of the repair time (VRT) as

VTBF(α) = α · MTBF² and VRT(α) = α · MRT².

The case α = 1 corresponds to the exponential distribution, such that we can examine the behavior of the performance measures by varying the variance, i.e., varying α.
The parameters for the gamma distribution are d^TBF_1(α) = 1/α and d^TBF_2(α) = α · MTBF for the times between failures, and d^RT_1(α) = 1/α and d^RT_2(α) = α · MRT in the case of repair times.
The sample mean of the density in the exponential case is shown in figure 17, where we use 10^4 iterations for the Monte-Carlo simulation. If we compare this result to the choices α = 4 and α = 1/4 in figure 18, we observe a smoother shape for higher variance and vice versa.
As one may expect, the queue-loads and their variance are positively correlated with the variance of the holding time distribution, see table 3.

Table 3. Sample mean and variance of the network queue-load

Next, we investigate a range of configurations α ∈ {2^x : x = −2, −1.5, . . . , 1.5, 2} to capture one fourth up to four times the exponential variance and compare the performance measures. To see the sensitivity of the model with respect to the parameter α, we plot each result of the simulation in terms of log_2(α); if log_2(α) = 0 holds, we are in the exponential case again. The number of Monte-Carlo iterations is 10^4 and we assume an empty system at the beginning. Figure 20 shows the estimated mean and the estimated variance of the first capacity process (µ^1(t), t ∈ [0, 4]) for different α. The computed convergence of both quantities to a value which is independent of α is striking. We suppose that the one-dimensional distribution, i.e. the distribution of the random variable µ^1(t) for fixed t, tends to a limit which is independent of α. The sample mean network outflow in figure 21 shows the negative influence of a higher variance although the mean of the distribution is fixed. Together with the positive dependence of the expected queue-load on the variance, this implies a relevant connection between the performance of the production network and the chosen probability distribution.
We are also interested in how the variance of the queue-loads and the network outflow behaves if we double or halve the variance of the intermediate times. In the log_2-log_2 diagram of figure 22, we observe a linear relationship with a slope of approximately one, i.e., doubling the variance implies a doubled variance of the outflow and the net queue-load. Indeed, we cannot observe this behavior if only one capacity process is changed in a more complex network; in particular, this processor would then determine the slope of the linear relationship. The production time does not follow the linear relationship, as figure 23 shows. In fact, the dependence is exponential even in the log_2-log_2 scaling and therefore very sensitive with respect to the distribution. Summarizing, we can conclude that the choice of the probability distribution for the holding times strongly affects the performance measures of the production system.

Figure 21. Comparison of the sample mean of the network outflow and the network queue-loads
Figure 22. Comparison of the sample variance of the network outflow and the network queue-loads

6. Conclusion. For a better presentation of the results, we have restricted the modeling to serial networks. However, an extension to general networks is straightforward. Future work includes on the one hand the extension of the stochastic model to load dependent capacities and on the other hand the construction of the transition matrix Q, for example the analysis of the entries in Q if the capacity process (µ(t), t ≥ 0) can be represented as the sum of independent capacity processes.