Piecewise smooth systems near a co-dimension 2 discontinuity manifold: can one say what should happen?

We consider a piecewise smooth system in the neighborhood of a co-dimension 2 discontinuity manifold $\Sigma$. Within the class of Filippov solutions, if $\Sigma$ is attractive, one should expect solution trajectories to slide on $\Sigma$. It is well known, however, that the classical Filippov convexification methodology is ambiguous on $\Sigma$. The situation is further complicated by the possibility that, regardless of how sliding on $\Sigma$ is taking place, during sliding motion a trajectory encounters so-called generic first order exit points, where $\Sigma$ ceases to be attractive. In this work, we attempt to understand what behavior one should expect of a solution trajectory near $\Sigma$ when $\Sigma$ is attractive, what to expect when $\Sigma$ ceases to be attractive (at least, at generic exit points), and finally we also contrast and compare the behavior of some regularizations proposed in the literature. Through analysis and experiments we will confirm some known facts, and provide some important insight: (i) when $\Sigma$ is attractive, a solution trajectory indeed does remain near $\Sigma$, viz. sliding on $\Sigma$ is an appropriate idealization (of course, in general, one cannot predict which sliding vector field should be selected); (ii) when $\Sigma$ loses attractivity (at first order exit conditions), a typical solution trajectory leaves a neighborhood of $\Sigma$; (iii) there is no obvious way to regularize the system so that the regularized trajectory will remain near $\Sigma$ as long as $\Sigma$ is attractive, and so that it will be leaving (a neighborhood of) $\Sigma$ when $\Sigma$ loses attractivity. We reach the above conclusions by considering exclusively the given piecewise smooth system, without superimposing any assumption on what kind of dynamics near $\Sigma$ (or sliding motion on $\Sigma$) should have been taking place.

1. Introduction. Consider the following piecewise smooth (PWS) system:

ẋ = f_i(x), x ∈ R_i, i = 1, 2, 3, 4, (1)

for t ∈ [0, T], where the R_i ⊆ R^n are open, disjoint and connected sets, with R^n = ∪_i R̄_i. System (1) is subject to the initial condition x(0) = x_0, prescribed in one of the regions R_i. In (1), for any i = 1, 2, 3, 4, each f_i is smooth in R_i, so that there is a classical solution in each region R_i, but the solution is not properly defined on the boundaries of these regions. We assume that these regions are separated (locally) by an implicitly defined smooth manifold Σ of co-dimension 2. That is, we have

Σ = {x ∈ R^n : h(x) = 0}, h(x) = (h_1(x), h_2(x)), (2)

and, for all x ∈ Σ: ∇h_j(x) ≠ 0, h_j ∈ C^k, k ≥ 2, j = 1, 2, and ∇h_1(x), ∇h_2(x) are linearly independent on (and in a neighborhood of) Σ. It is useful to think of Σ = Σ_1 ∩ Σ_2, where Σ_1 = {x : h_1(x) = 0} and Σ_2 = {x : h_2(x) = 0} are co-dimension 1 manifolds. Without loss of generality, we will henceforth label the four regions R_i, i = 1, 2, 3, 4, according to the signs of the pair (h_1(x), h_2(x)). For later use, we will also adopt the notation Σ_1^± and Σ_2^± to denote the sets of points x ∈ Σ_1 or Σ_2 for which we also have h_2(x) ≷ 0 or h_1(x) ≷ 0, respectively. E.g., Σ_1^+ = {x ∈ Σ_1 : h_2(x) > 0}. Finally, we will denote with

w_i^j(x) = ∇h_j(x)^T f_i(x), i = 1, ..., 4, j = 1, 2, (4)

the projections of the vector fields in the normal directions to the manifolds. For (1), a classical solution in general cannot exist on the boundaries of the given regions, and several concepts of generalized solution have been proposed over the years (see [4] for a beautiful exposition of different solution concepts). We will restrict attention to Filippov solutions [15], consisting of absolutely continuous functions whose derivative is in the convex hull of the neighboring vector fields almost everywhere.
1.1. Co-dimension 1. In the case of a single discontinuity manifold of co-dimension 1, the Filippov methodology provides a widely accepted mathematical framework for understanding motion on the discontinuity surface.
Consider a discontinuity manifold Σ = {x ∈ R^n : h(x) = 0}, h : R^n → R, separating two regions R_1 (where h(x) < 0) and R_2 (where h(x) > 0), with respective vector fields f_1 and f_2. Assume that Σ is attractive, a condition that is satisfied when

∇h(x)^T f_1(x) > 0 and ∇h(x)^T f_2(x) < 0, x ∈ Σ.

Then sliding motion on Σ takes place with the Filippov vector field

f_F = (1 − α) f_1 + α f_2, α = ∇h^T f_1 / (∇h^T (f_1 − f_2)),

where α ∈ (0, 1) is chosen so that ∇h^T f_F = 0. Filippov theory also provides first order exit conditions: if one of f_1 or f_2 (but not both) becomes tangent to Σ, then α = 0 or 1, Σ loses attractivity, and the solution trajectory leaves Σ tangentially (and smoothly) to enter R_1 with vector field f_1, or R_2 with vector field f_2. Furthermore, it has long been understood that the limiting behavior of the Euler iterates near Σ leads to the selection of the Filippov sliding vector field itself (e.g., see [26], Chapter 3, Section 1.1) whenever Σ is attractive, and that the Euler iterates leave a neighborhood of Σ when Σ loses attractivity.
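The co-dimension 1 sliding construction just recalled is easy to sketch numerically. The fields and the function h below are a toy example of ours (not taken from the paper); the sliding field f_F is the convex combination of f_1, f_2 that is tangent to Σ:

```python
import numpy as np

def filippov_sliding(f1, f2, grad_h, x):
    """Filippov sliding field f_F = (1-a) f1 + a f2 on Sigma = {h = 0},
    with a chosen so that grad_h . f_F = 0 (tangency)."""
    w1 = grad_h(x) @ f1(x)   # > 0 when Sigma is attractive from R1
    w2 = grad_h(x) @ f2(x)   # < 0 when Sigma is attractive from R2
    a = w1 / (w1 - w2)       # a lies in (0, 1) under attractivity
    return (1 - a) * f1(x) + a * f2(x), a

# toy example in R^2 with h(x) = x[1]: f1 pushes up, f2 pushes down
f1 = lambda x: np.array([1.0,  0.8])
f2 = lambda x: np.array([2.0, -0.4])
grad_h = lambda x: np.array([0.0, 1.0])
fF, a = filippov_sliding(f1, f2, grad_h, np.array([0.0, 0.0]))
# the normal component of fF vanishes: the sliding motion stays on Sigma
```

Here the exit conditions are visible in the formula: as w1 → 0 we get a → 0 and f_F → f_1, i.e., the trajectory leaves Σ tangentially with f_1.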
1.2. Co-dimension 2. However, when Σ is of the form (2), an obvious lack of uniqueness (in general) arises in the construction of a Filippov vector field sliding on Σ. In fact, on Σ, Filippov methodology now leads to the requirement that the sliding vector field f_F satisfy the following (for positive values of λ_1, ..., λ_4):

f_F = λ_1 f_1 + λ_2 f_2 + λ_3 f_3 + λ_4 f_4, λ_1 + λ_2 + λ_3 + λ_4 = 1, ∇h_1^T f_F = 0, ∇h_2^T f_F = 0, (6)

which is clearly an underdetermined system of equations. Indeed, even when Σ is attractive, in general (6) has a one-parameter family of solutions, and hence of possible Filippov sliding vector fields.
Remarks 1.1. (i) Characterization of attractivity of Σ in the present co-dimension 2 case is considerably more elaborate than in the case of co-dimension 1.
We will assume that Σ is either attractive upon sliding (see [10]) or attractive by spiralling (see [7]), in the form recalled below in Definition 1.2. (ii) In the present case, the general lack of uniqueness is not resolved by considering the limiting behavior of the Euler iterates near Σ. In fact, as already noted in [15], the limit of the Euler iterates selects one specific element of the one-parameter family of Filippov solutions.
Definition 1.2. Σ (or a portion of it) is attractive upon sliding if it is reached in finite time by solution trajectories for any given nearby initial condition, and further there is sliding motion towards Σ along at least one of the Σ_1^±, Σ_2^±. When there is sliding motion towards Σ along all of the Σ_1^±, Σ_2^±, then we say that Σ is nodally attractive. Σ (or a portion of it) is attractive by spiralling if Σ is reached in finite time by trajectories for any given nearby initial condition, and there is clockwise or counterclockwise motion around Σ (and no sliding on the Σ_1^±, Σ_2^±) for the functions h_1(x) and h_2(x).
When Σ is attractive, in the literature there have been at least two systematic proposals to select the coefficients λ_i in (6), leading to sliding vector fields of Filippov type on Σ: the bilinear and the moments vector fields. The former has been extensively studied, see [1,2,10,17] for example, and it consists in choosing the sliding vector field in the bilinear form f_B(α, β): a combination of the four fields f_i with weights given by products of α or 1 − α and β or 1 − β, where (α, β) ∈ [0, 1]^2 is determined by the two tangency conditions

∇h_1^T f_B = 0, ∇h_2^T f_B = 0. (7)

The moments method has been recently introduced in [9] and consists in solving the linear system in (6) by appending to it one extra relation; we refer to [9] for its precise form. The two choices above generally lead to distinct solution trajectories, a chief difference between them being the behavior of the bilinear and moments trajectories at so-called generic tangential exit points. These were introduced in [10], and are values of x ∈ Σ where one (and just one) of the sliding vector fields on Σ_1^± or Σ_2^± is itself tangent to Σ (the exit vector field). The moments vector field automatically aligns with the exit vector field, whereas the bilinear vector field does not always align with it. An important question, and one which we will try to answer in this work, is: what should happen to trajectories of (1) in a neighborhood of Σ when Σ loses attractivity (according to generic first order exit conditions)?
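As an illustration of the bilinear selection, the following sketch solves the two tangency conditions for (α, β) by Newton's method. The matrix of projections W, the counterclockwise labeling of the weights, and the specific numbers are all our own assumptions for the sake of a concrete example, not data from the paper:

```python
import numpy as np

# Hypothetical normal projections W[i, j] = grad_h_j . f_i of four constant
# fields (rows i = 0..3 for f1..f4); signs chosen so Sigma is attractive.
W = np.array([[ 1.0,  0.5],   # f1
              [-0.8,  0.7],   # f2
              [-0.6, -0.9],   # f3
              [ 0.9, -0.4]])  # f4

def weights(a, b):
    # bilinear weights, assuming f1..f4 labeled counterclockwise around Sigma
    return np.array([(1-a)*(1-b), a*(1-b), a*b, (1-a)*b])

def g(ab):
    # tangency residuals (g1, g2): normal components of f_B must vanish
    a, b = ab
    return weights(a, b) @ W

# solve g(a, b) = 0 by Newton's method with a finite-difference Jacobian
ab = np.array([0.5, 0.5])
h = 1e-7
for _ in range(50):
    J = np.empty((2, 2))
    for k in range(2):
        e = np.zeros(2); e[k] = h
        J[:, k] = (g(ab + e) - g(ab)) / h
    ab = ab - np.linalg.solve(J, g(ab))
# ab now holds (alpha, beta) in (0, 1)^2 when a unique sliding solution exists
```

With these projections the iteration converges to an interior root, so the bilinear sliding field is well defined; at a tangential exit point one of the two components of ab would reach 0 or 1.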
1.3. Regularize. If natura non facit saltum, then (1) is either a description of an unnatural system or a wrong model. Probably, whenever it is set forth, it is neither unnatural nor wrong, though it may be a little of both things, perhaps because the model should be complemented by some missing information (the 20-year-old exposition of Seidman in [23] is still worthwhile reading). Regardless, the above 18th century motto suggests considering a regularized version of (1), replacing it with a smooth differential system. Of course, we must assume that we do not have knowledge of where (1) comes from, if from anywhere at all; otherwise, we should surely use this knowledge. However, in the absence of further insight, it is not obvious how one should globally regularize the system, and several possibilities have been proposed in the literature.
Arguably, the most studied regularization techniques are what we may call the "singular perturbation" and "sigmoid blending" techniques. The paper [24] was the first seminal work on the singular perturbation approach, followed by the more recent works [18], as well as [13], [16], [17] and, in the context of gene regulatory networks, [20]. The first systematic exposition of blending was the beautiful work [1]. But, in the end, these techniques are all rather similar and amount to a regularization of the bilinear vector field. This can be done locally, just in a neighborhood of Σ, say using a cutoff function (as we do in Section 2), or more globally, perhaps through use of hyperbolic tangent functions, tanh, to connect different vector fields.
In this work, we are exclusively concerned with local regularizations, hence those that alter the given problem only in a neighborhood of Σ. In this case, the above mentioned regularization proposals share some common traits, the most important ones being that, outside of a neighborhood of Σ, the regularized vector field effectively reduces to the original vector fields, and that the regularization depends on a small parameter (or several small parameters) in such a way that, as the parameter(s) go to 0, the neighborhood collapses onto Σ. The first fact is surely a reasonable property, since, away from Σ, there is a well defined smooth vector field depending on where the trajectory is, be it one of the original f_1, ..., f_4 or one of the Filippov sliding vector fields on the surfaces Σ_1, Σ_2. The second fact may be a bit more controversial, since the regularized trajectory will typically select a specific sliding motion on Σ (when Σ is attractive) as the neighborhood collapses onto Σ; however, as is well understood, and as we will also see, different regularizations do behave differently. As noted by Utkin ([26]), given that in a neighborhood of Σ there are non-unique dynamics, the inherited dynamics on Σ (sliding motion) does depend on the choice of regularization. But, this being the case, is there an appropriate way to evaluate different regularizations? To answer this question, we must first decide how we should evaluate dynamics.
1.4. Evaluate dynamics. The first observation is that when we evaluate the dynamics of (1) we should distinguish between the two cases (i) and (ii) below.
(i) The PWS system (1) is just a "convenient" formalism: there is a "true" smooth system, defined globally, but it is simpler to replace it by a PWS one. For example, this is the case for problems arising in gene regulatory networks ([19], [5]). In these cases, one knows what the desired behavior in a neighborhood of Σ is: it is the behavior of the original problem! However, one must be careful in replacing the true system by (1), since it is not clear that the behavior of a typical solution of the purely PWS system reflects the dynamics of the original problem; this was already noted in works such as [21] and [23], relative to a discontinuity surface of co-dimension 1. But, then, does this discrepancy mean that representing the original problem as a PWS one was not an appropriate modeling simplification in the first place? Rather, the question is whether and when the dynamics of the PWS system might still be helpful in understanding the behavior of solutions of the original problem. This paper is also a partial answer to this question.

(ii) The problem is genuinely piecewise smooth (as a real life or as a mathematical model), or we do not have sufficient knowledge of an underlying "true" problem (if any); for example, in bang-bang control, in dry friction models, or in the theory of nonsmooth dynamical systems. In our opinion, in these cases, when there is no (knowledge of an) underlying "true" dynamics that one is trying to recover, the choices we make must be consistent with the PWS formulation. This is the case in which we are interested.
The above consideration (ii) leads us to restrict to the model (1) as the system we are given, and it is this system that we will study. This realization motivates us (and it has motivated us for several years) to look at the global properties of the discontinuity surface, and to decide on what is appropriate based on whether or not the surface attracts the dynamics of the PWS system for initial conditions off the discontinuity surface itself. In our opinion, it is not easy to justify studying (1) by assuming a specific form of an underlying system from which (1) arises through some form of limiting process. In other words, whereas it is surely legitimate (and by no means trivial) to study the limiting behavior of a specific choice of regularized vector field defined in a neighborhood of Σ, this may actually have a restricted scope of applicability compared to (1). As already noted by Utkin (see [25, p.44]), once one has replaced (1) with a smoothed version of it, the stability of sliding motion on Σ is inherited from the dynamics of the regularized field. But is this consistent with the formulation (1)?
For the given PWS system (1), we believe that it is its dynamics near the discontinuity manifold that determines the appropriate behavior of a trajectory. This dynamics is what we will try to capture in this work. Moreover, our main interest is in the case when the discontinuity manifold transitions from being attractive to not attractive for the trajectories of the original discontinuous system: in this case, will (or should) a trajectory (even an ideal trajectory, sliding on the manifold) feel this loss of attractivity, and hence leave a neighborhood of Σ? Therefore, without assuming any form of idealized motion on Σ, we can reformulate our main task as follows: "how should we evaluate the dynamics of (1) in a neighborhood of Σ?" In principle, one may want to do this by studying the dynamics of a regularized problem, and we have already mentioned some possibilities, such as sigmoid blending and singular perturbation techniques; e.g., see [1,13,16,17,18,24]. We emphasize once more that these choices (as noted by Alexander and Seidman for blending, and by Teixeira et al. for singular perturbation) remove the ambiguity of how sliding on Σ occurs, but these choices are effectively modeling assumptions, and we should ask whether they render a behavior of the dynamics on Σ that is consistent with that of (1).
Other possibilities have also been set forth in the literature, see [1] for further references.
(a) Euler broken line approximation. This is simple to do, and it consists in replacing (1) by an Euler method approximation with constant stepsize, call it τ. We have experimented extensively with this technique; see below.
[The eventuality, of probability 0, that an Euler iterate lands exactly on a discontinuity surface can be handled in different ways; e.g., by randomly selecting one of the neighboring vector fields, or by retaining the last vector field used.]

(b) Hysteresis (or delay) approximation. This approach appears in [26] for a surface of co-dimension 1. For the case of Σ of co-dimension 2, it has been studied first in [2] and then in [8], in both cases for a nodally attractive Σ. The idea here is that one has a region U around Σ (called a chatterbox in [2]), and keeps using the same vector field, say f_1, not only in the region R_1, but until the trajectory reaches the boundary of U inside a different region; at that point, a switch to the appropriate vector field in the new region is performed. The rationale for this approach is that one does not notice immediately that a discontinuity surface has been reached, but there is a "delay" in appreciating this fact.

(c) Replacing (1) with a stochastic DE of Ito type: dx = f dt + dW_t. Again, we note that there is zero probability of landing exactly on the discontinuity surface(s). The interesting feature of this approach is that it is bound to sample different vector fields around Σ. The disadvantage is that it makes quantitative predictions possible only in a statistical sense.
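Approach (a) is easy to sketch. The toy piecewise constant field below is our own (with h_1 = x_1, h_2 = x_2, so that Σ is the x_3-axis and is nodally attractive); it illustrates the chattering of the constant-stepsize Euler iterates near Σ, including one simple way to handle the probability-0 event of landing exactly on a discontinuity surface:

```python
import numpy as np

def region_field(x):
    """Select the active vector field from the signs of h1(x) = x[0],
    h2(x) = x[1]. Toy (hypothetical) fields making Sigma attractive."""
    s1, s2 = np.sign(x[0]), np.sign(x[1])
    if s1 == 0 or s2 == 0:          # probability-0 event: keep a fixed choice
        s1, s2 = s1 or -1, s2 or -1
    # push the trajectory back across both discontinuity surfaces
    return np.array([-s1, -s2, 1.0])

def euler_broken_line(x0, tau, n_steps):
    """Approach (a): constant-stepsize Euler through the PWS field."""
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(n_steps):
        x = x + tau * region_field(x)
        traj.append(x.copy())
    return np.array(traj)

traj = euler_broken_line([0.3, -0.2, 0.0], tau=0.01, n_steps=500)
# after a transient, the iterates chatter within O(tau) of Sigma
```

After the transient, the iterates remain within one stepsize of Σ while the third component drifts along the manifold; this is the discrete analogue of sliding.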
With the exception of (c), the other choices above effectively replace the original PWS system with another deterministic dynamical system, possibly discrete (as in the case of Euler's method). But, unfortunately, these new systems have their own dynamics, and it is unclear whether or not these are consistent with that of (1). See Section 4 for ample illustration of this fact. At the same time, each of the cases above has some distinctive features that we will attempt to retain.
1.5. Proposal and plan. Our proposal in support of what should happen to the dynamics of (1) in a neighborhood of Σ is to "consider the Euler iterates with random steps," with steps uniformly distributed about a reference stepsize. In other words, we want to retain the simplicity of looking at Euler iterates, but inject a certain amount of randomness into the process, to avoid getting trapped by the purely deterministic Euler dynamics. Furthermore, in this work we will restrict to well-scaled vector fields f_i, i = 1, ..., 4, with none of the w_i^j's in (4) exceeding 1 in absolute value. The reason for this is to avoid the numerical trajectory going too far away from Σ, and at the same time to attempt retaining the flavor of a hysteretic trajectory.
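A minimal sketch of this proposal follows; the uniform window [τ/2, 3τ/2] around the reference stepsize and the toy field (each of the first two components pushed toward zero) are our own assumptions, since the text only asks for steps distributed about a reference stepsize:

```python
import numpy as np

def euler_random_steps(field, x0, tau_ref, n_steps, seed=0):
    """Euler iterates with random stepsizes drawn uniformly about the
    reference stepsize tau_ref (window [0.5, 1.5]*tau_ref is our choice)."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        tau = rng.uniform(0.5 * tau_ref, 1.5 * tau_ref)
        x = x + tau * field(x)
    return x

def field(x):
    # toy PWS field (ours): Sigma = {x0 = x1 = 0} is nodally attractive
    s = np.sign(x[:2])
    s[s == 0] = -1.0                 # probability-0 event, fixed choice
    return np.array([-s[0], -s[1], 1.0])

x_final = euler_random_steps(field, [0.3, -0.2, 0.0], tau_ref=0.01, n_steps=500)
# the iterates chatter within O(tau_ref) of Sigma, without locking into
# a periodic pattern of the constant-step Euler dynamics
```

The randomness breaks the exact periodic cycles that constant-step Euler iterates can settle into, which is precisely the motivation stated above.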
We will complement the experiments made with the above strategy by also using other approaches: for example, Euler's method with constant stepsize, singular perturbation regularizations, and numerical integration of (1) performed with variable stepsize integrators.
We emphasize that our goal is to reach some conclusions as to the behavior of a typical solution trajectory of (1) in a (small) neighborhood of Σ. We are not comparing methods, or different recipes of ideal sliding motion, but simply trying to evaluate the dynamics of (1) in the most plausible and honest way we can think of, without superimposing on (1) any extra modeling assumption.
Finally, one more caveat. Our examples are all of systems in R 3 and R 4 , and the discontinuity surfaces Σ 1 and Σ 2 are planes; this makes it easier to visualize and understand things. Higher dimensional state space, and non-planar discontinuity surfaces, can surely bring new phenomena into play, but we have no reason to suspect that the basic picture that emerges in our study, with the dichotomy between attractivity and lack of attractivity of Σ, will be modified substantially.
The remainder of this work is structured as follows. In Section 2, we consider a prototypical regularization of the bilinear vector field (7), and give sufficient (and sharp) conditions guaranteeing that the regularized solution converges to a sliding solution on Σ according to (7). In Section 3, we give some details of how we implemented the above mentioned proposal, particularly of what practical criteria we adopted to detect "exiting" from a neighborhood of Σ. In Section 4, we illustrate through several examples the different things that can happen, and their dependence on the adopted simulation choice.
2. Bilinear interpolant regularization: Solution behavior and fast-slow dynamics. Space regularizations are often employed in the literature as an analytical means to model the switching mechanism of a discontinuous system; see [2,13,16,17,18,24,25]. Typically, these regularizations are one-parameter families of smooth vector fields with different time scales in a neighborhood of the sliding surface Σ, namely a slow dynamics tangent to Σ and a fast one normal to Σ. In what follows, we consider regularizations for Filippov discontinuous systems as in (1), but other approaches are available in the literature for non-smooth systems that are not of Filippov type, for example control systems with nonlinear control, as in [25,26]. When the regularization parameter goes to zero, the regularized solution converges to a solution of Filippov's differential inclusion ([15, Theorem 1, §8]). It follows that, if Σ is a codimension 1 discontinuity surface, for any regularization that satisfies the assumptions of [15, Theorem 1, §8], the corresponding solution will converge to the unique sliding Filippov solution on Σ as the regularization parameter goes to zero. But if Σ has codimension k ≥ 2, then the ambiguity of Filippov's selection will be reflected also in the ambiguity of the limit of regularized solutions (in other words, the limit will depend on the chosen regularization).
Our goal in this section is to study the limiting behavior of the solutions of a certain regularization, namely the bilinear interpolant, or simply bilinear, regularization (10) below. This regularization (or a close relative) has often been employed and studied in the literature; see [1,2,13,16,17,18,22,23]. More specifically, we will study when the solution of the bilinear regularization converges to the solution of a particular selection of the Filippov vector field: the sliding bilinear vector field (7). When Σ is nodally attractive, this convergence has been shown in [1, Theorem 5.1], and (under the same assumption) in [13] it is shown that the bilinear regularized vector field converges to the bilinear sliding vector field. In what follows, we relax the hypothesis on the attractivity of Σ, and simply assume that Σ is attractive in finite time, i.e., all trajectories with initial conditions in a neighborhood of Σ will reach Σ in finite time. This can be achieved either upon sliding along Σ_1 and/or Σ_2 (Σ is attractive upon sliding), or by spiralling around Σ (Σ is spirally attractive); see Definition 1.2. For later reference, and under the stated attractivity assumptions on Σ, we note that the algebraic system (7) has a unique solution (α, β) in (0, 1)^2.
For simplicity, we assume that Σ is the intersection of the hyperplanes Σ_1 = {x ∈ R^n : x_1 = 0} and Σ_2 = {x ∈ R^n : x_2 = 0}. Then, we consider the ε-neighborhood S_ε of Σ:

S_ε = {x : −ε ≤ x_1, x_2 ≤ ε},

and two smooth (at least C^1) monotone functions α, β : R → [0, 1] (the functions of course depend on ε, but for notational simplicity this dependence is omitted), interpolating at ±ε as follows:

α(x_1) = 0 for x_1 ≤ −ε, α(x_1) = 1 for x_1 ≥ ε; β(x_2) = 0 for x_2 ≤ −ε, β(x_2) = 1 for x_2 ≥ ε. (9)

Then, we call bilinear regularization the one-parameter family of vector fields

ẋ = f_B^ε(x), (10)

obtained from the bilinear combination in (7) by replacing the tangency-determined (α, β) with the functions α(x_1), β(x_2). To be specific, in what follows we have taken the C^1 cubic interpolant

γ(x) = 1/2 + (3/4)(x/ε) − (1/4)(x/ε)^3, x ∈ [−ε, ε], (11)

and α(x_1) = γ(x_1), β(x_2) = γ(x_2), but other choices of a C^1 monotone interpolant could be considered. Clearly, a choice of two different parameters for α and β, respectively ε_α and ε_β, could also be considered (e.g., see [22]), and this would be justified if, for example, it is known a priori that the trajectories of an underlying physical system approach the two surfaces Σ_1 and Σ_2 at different rates. Naturally, the choice of different parameters might lead to different qualitative behavior of the corresponding solutions, as we will see in Section 4.
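In code, the regularized field can be sketched as follows. The cubic cutoff (with slope 3/(4ε) at the origin, matching the time rescaling used later) and the counterclockwise labeling of the bilinear weights are our own reading of the construction; the four fields are placeholders to be supplied by the user:

```python
import numpy as np

def gamma(x, eps):
    """C1 monotone cutoff: gamma(-eps) = 0, gamma(eps) = 1, gamma'(+-eps) = 0,
    with slope 3/(4*eps) at x = 0 (the cubic interpolant is our assumption)."""
    z = np.clip(x / eps, -1.0, 1.0)
    return 0.5 + 0.75 * z - 0.25 * z**3

def f_B_eps(x, f, eps):
    """Bilinear regularization: blend the four fields f[0..3] with weights
    built from alpha = gamma(x1), beta = gamma(x2). The counterclockwise
    region labeling f1..f4 is an assumption of this sketch."""
    a, b = gamma(x[0], eps), gamma(x[1], eps)
    w = [(1-a)*(1-b), a*(1-b), a*b, (1-a)*b]
    return sum(wi * fi(x) for wi, fi in zip(w, f))
    # outside S_eps exactly one weight equals 1, so the blend reduces to
    # the single smooth field active in that region
```

Outside the strip S_ε the weights collapse to 0/1 and the regularized field coincides with one of the original f_i, which is the first of the two common traits of local regularizations discussed in Section 1.3.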
Our goal is to study the behavior of solutions of ẋ = f_B^ε(x) for x ∈ S_ε, and to see when, for ε → 0, they converge to the sliding solution on Σ with vector field f_B as given in (7). In order to do so, we split ẋ = f_B^ε(x) into fast and slow motions. For (x_1, x_2) in S_ε, we can rewrite (10) with respect to the variables (α, β, x_3, ..., x_n). For the sake of more compact notation, we let y = (x_3, ..., x_n), write g_i(α, β, y) = e_i^T f_B, i = 1, 2, for the normal components, where e_i is the standard i-th unit vector in R^n, and further write ẏ = g(α, β, y) for the remaining components. In these variables, the full system rewrites as

α̇ = (dα/dx_1) g_1(α, β, y), β̇ = (dβ/dx_2) g_2(α, β, y), ẏ = g(α, β, y). (12)

Notice that dα/dx_1 and dβ/dx_2 depend on ε and are strictly positive (inside S_ε); from (11), they are equal to

dα/dx_1 = (3/(4ε)) (1 − (x_1/ε)^2), dβ/dx_2 = (3/(4ε)) (1 − (x_2/ε)^2).

We refer to α and β as the fast variables and to y as the slow variable. We denote the solution of (12) as (α_ε(·), β_ε(·), y_ε(·)). Now, using (11) in (12), because of the monotonicity of α and β, we can rewrite x_1 as a function of α and x_2 as a function of β. From (11), let z = x_1/ε and rewrite the first of (11), for x_1 ∈ [−ε, ε], as the cubic

z^3 − 3z + 4α − 2 = 0.

The complex number w = 1 − 2α + 2i √(α − α^2) has modulus one, and so do its third roots; writing ϕ(α) = Arg(w), the root z(α) of the cubic can be written as z(α) = 2 cos(ϕ(α)/3 + 4π/3) and satisfies z(0) = −1 and z(1) = 1: this z(α) is the function we are looking for. The same reasoning applies to β. Then system (12) rewrites as

(4ε/3) α̇ = (1 − 4 cos^2(ϕ(α)/3 + 4π/3)) g_1(α, β, y), (4ε/3) β̇ = (1 − 4 cos^2(ϕ(β)/3 + 4π/3)) g_2(α, β, y), ẏ = g(α, β, y), (13)

with ϕ(γ) = Arg(1 − 2γ + 2i √(γ − γ^2)). The function 1 − 4 cos^2(ϕ(γ)/3 + 4π/3) is strictly positive for γ ∈ (0, 1) and is 0 at γ = 0, 1. Following standard approaches for singularly perturbed systems, we set ε = 0 in (13) and obtain the following system

0 = g_1(α, β, y), 0 = g_2(α, β, y), ẏ = g(α, β, y). (14)

Notice that solutions of (14) are sliding solutions on Σ with bilinear vector field f_B as in (7). Let (α*(y), β*(y)) denote the solution of

g_1(α, β, y) = 0, g_2(α, β, y) = 0. (15)

(Recall that, under the assumptions of attractivity by sliding or by spiralling, (α*(y), β*(y)) is the unique solution of (15) in [0, 1]^2.)
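The trigonometric expression z(α) = 2 cos(ϕ(α)/3 + 4π/3) and the positivity of the factor 1 − 4 cos^2(ϕ(α)/3 + 4π/3) can be verified numerically. In this sketch, the cubic z^3 − 3z + 4α − 2 = 0 is our rendering of the relation α = γ(x_1) with z = x_1/ε:

```python
import numpy as np

def z_of_alpha(a):
    """Root of z^3 - 3z + 4a - 2 = 0 selected so that z(0) = -1, z(1) = 1."""
    phi = np.angle((1 - 2*a) + 2j*np.sqrt(a - a*a))   # |w| = 1, phi = Arg w
    return 2.0 * np.cos(phi/3.0 + 4.0*np.pi/3.0)

a = np.linspace(0.0, 1.0, 101)
z = z_of_alpha(a)
resid = z**3 - 3*z + 4*a - 2           # cubic residual, ~0 everywhere
factor = 1.0 - z**2                    # = 1 - 4 cos^2(phi/3 + 4 pi/3)
# factor is positive on (0, 1) and vanishes at the endpoints alpha = 0, 1
```

The vanishing of the factor at α = 0, 1 is what makes the boundary of the square [0, 1]^2 invariant for the fast dynamics, as exploited in Proposition 1 below.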
Our goal in this section is twofold: to see if solutions of (12) converge to solutions of (14) as → 0 and to explore the behavior of solutions of (12) in the neighborhood of generic exit points.
2.1. Asymptotic behavior of the regularized problem for Σ attractive in finite time. From (12), we introduce the fast time variable τ = 3t/(4ε) and consider the fast system

α' = (1 − 4 cos^2(ϕ(α)/3 + 4π/3)) g_1(α, β, y), β' = (1 − 4 cos^2(ϕ(β)/3 + 4π/3)) g_2(α, β, y), (16)

where the "prime" denotes differentiation with respect to τ, and y is considered as a vector of parameters. Notice that the solution (α*(y), β*(y)) of (15) is an equilibrium of (16). We denote with (α(τ, y), β(τ, y)) the solution of (16) with initial condition (α_0, β_0).

Remark 2. In case we take different ε_α and ε_β in (11), we can write ε_β = η ε_α, perform the change of time variable τ = 3t/(4ε_α), and obtain (similarly to (16)) the fast system

α' = (1 − 4 cos^2(ϕ(α)/3 + 4π/3)) g_1(α, β, y), β' = (1/η) (1 − 4 cos^2(ϕ(β)/3 + 4π/3)) g_2(α, β, y). (17)

The following result by Artstein is a stronger version of a classical result in singular perturbation theory ([3, Theorem 2.1]).
Had we used the piecewise linear (C^0) interpolant, e.g. α(x_1) = (x_1 + ε)/(2ε) for x_1 ∈ [−ε, ε], and the time rescaling τ = t/(2ε), then we would have obtained the system

α' = g_1(α, β, y), β' = g_2(α, β, y), (18)

instead of (16). Now, (18) is precisely the "dummy" system of [17] and [16]. However, (16) is not orbitally equivalent to (18). As we will see in Proposition 2, under appropriate conditions of attractivity of Σ, (α*(y), β*(y)) is an asymptotically stable equilibrium for both (16) and (18). However, condition (iii) in Theorem 2.1 implies that the limiting behavior of the solution of (12) depends also on the basin of attraction of (α*(y), β*(y)), and this, in general, is not the same for the two systems. As an illustration of this, see Example 4.2 in Section 4. System (16) is the system we need to study in order to understand the limiting behavior of the regularized solution in the case of the C^1 regularization (11). Moreover, we note that if we had used the C^0 regularization with different parameters ε_α and ε_β, namely α(x_1) = (x_1 + ε_α)/(2ε_α) and β(x_2) = (x_2 + ε_β)/(2ε_β), then we would have obtained a system not orbitally equivalent to (18).
In what follows, we first assume that Σ is attractive in finite time (upon sliding or spiralling), and we want to verify if and when (i), (ii) and (iii) are satisfied. This in turn will imply that solutions of the regularized problem converge to the sliding solution on Σ with vector field f_B as given in (7). The following results hold both when Σ is attractive upon sliding and when it is spirally attractive.

Remark 4. We emphasize that the assumption that Σ is attractive in finite time is sufficient but not necessary for the uniqueness of (α*(y), β*(y)); e.g., see [10]. As we will see in Section 4, this in turn will impact the behavior of the regularized solution, which might remain close to Σ even when Σ is not attractive.
In Proposition 1 and Proposition 2, we study the dynamics of (16) to verify asymptotic stability of (α*, β*).

Proposition 1. The phase space of (16) is the square Q = [0, 1]^2. Moreover, the boundary of Q, denoted as ∂Q, is an invariant set for all values of y. The vertices of Q are equilibria. If any of the sliding vector fields f_{Σ_1}^±, f_{Σ_2}^± is well defined (i.e., if there is sliding on any of the Σ_1^±, Σ_2^±), the corresponding value of (α(ȳ), β(ȳ)) in (7) is an equilibrium of (16) for y = ȳ.
Proof. For α = 0, 1, or β = 0, 1, it must be x_1 = −ε, ε, or x_2 = −ε, ε, and hence α' = 0 or β' = 0. Then, for all y, the boundary of Q is invariant under the flow of (16). It also follows that the vertices of Q are equilibria of (16). Notice that the values of (g_1, g_2) at the vertices of Q are the pairs of normal projections (w_i^1, w_i^2) of the corresponding vector fields f_i, cf. (7). In addition to the vertices of Q, other equilibria of (16) might belong to ∂Q, as follows.
[Figure: Reference configuration under which Σ is attractive in finite time.]
Proposition 2. Assume that Σ is attractive in finite time upon sliding along at least two of the Σ_1^±, Σ_2^±, and that there is attractive sliding along these codimension 1 surfaces. Then (α*(y), β*(y)) is exponentially asymptotically stable for (16).
The assumptions in Proposition 2 exclude the case of Σ spirally attractive, and that of Σ attractive upon sliding on just one of the Σ_1^±, Σ_2^±. For each of these cases, examples can be given where the equilibrium (α*, β*) of (16) is unstable even if Σ is attractive. See Example 4.3 in Section 4, where, with ε_β = 10 ε_α, the equilibrium (α*, β*) of the fast system is unstable while Σ is attractive.

2.2. Behavior of the regularized solution in the neighborhood of exit points. Here, we are interested in studying how solutions of (12) behave when Σ loses attractivity. Assume first that Σ is attractive upon sliding. We will focus on the case when Σ loses attractivity at so-called potential tangential exit points and at potential non-tangential exit points, defined next.

Definition 2.2. Let x̄ be a point on Σ. We say that x̄ is a potential tangential exit point if, at x̄, one of the vector fields f_{Σ_1}^±, f_{Σ_2}^± is tangent to Σ (and hence it belongs to the Filippov convex combination). We say that x̄ is a potential non-tangential exit point if, at x̄, one of the vector fields f_j(x̄) is tangent to either Σ_1 or Σ_2, and it points away from Σ.
Simplification: Σ is a curve. For simplicity, below we restrict to the case n = 3, so that, in our case of Σ = {x_1 = x_2 = 0}, Σ is a (portion of the) x_3-axis. (Of course, in general, when the co-dimension 2 discontinuity manifold Σ is immersed in R^n, with n ≥ 3, we expect that exit points themselves will lie on a (union of disjoint) (n−3)-dimensional manifold(s) of codimension 3. With the understanding that exit points now lie on these manifolds, our description below still holds qualitatively.) Also, we consider only first order phenomena, i.e., only potential tangential (respectively, non-tangential) exit points x̄ such that the derivative with respect to time (evaluated along the trajectory) of f_{Σ_1,2}^±(x̄) (respectively, f_j(x̄), j = 1, ..., 4) is different from zero. As a consequence, for us exit points are isolated points on Σ.
In this case of n = 3, and as regards the behavior of the regularized solution in the neighborhood of tangential and non-tangential exit points, the following phenomena can arise.
2.2.1. Tangential exit points. What occurs in this case depends on the number of equilibria of (16).
Suppose that the regularized solution enters the neighborhood of a tangential exit point x̄ = (0, 0, x̄_3), and without loss of generality let f_{Σ_1}^+(x̄) be the vector field tangent to Σ, and let Σ be attractive for x_3 < x̄_3. Then either (a) or (b) below can happen.
(a) (α*(x̄_3), β*(x̄_3)) = (α^+(x̄_3), 1), and the regularized solution exits the neighborhood of Σ to enter a neighborhood of Σ_1^+.

(b) (α*(x̄_3), β*(x̄_3)) = (α^+(x̄_3), 1), and these are the only solutions of (15) for y = x_3 = x̄_3; a first order analysis guarantees that the solution that at x̄_3 equals (α^+(x̄_3), 1) is in the interior of [0, 1]^2 for x_3 in a right neighborhood of x̄_3. Nonetheless, (α*(x_3(t)), β*(x_3(t))) retains its asymptotic stability. It follows that the regularized solution remains close to Σ, and for ε → 0 it converges to the solution of the bilinear vector field on Σ, which is still well defined even though Σ is no longer locally attractive. Eventually, the regularized solution x(t) will leave a neighborhood of Σ if one of the two following phenomena takes place: (i) the equilibrium of (16) reaches a fold bifurcation value; (ii) (α*(x(t)), β*(x(t))) is on the boundary of Q for t = t̄, and this can happen only if one of the vector fields f_{Σ_1,2}^±(x(t̄)) is tangent to Σ, say f_{Σ_2}^+. In case (ii), the regularized solution will leave the neighborhood of Σ to enter a neighborhood of Σ_2^+. But in case (i) it is not generally possible to predict how the regularized solution will leave a neighborhood of Σ.
Remark 5. The phenomena (a) and (b) above do not depend on $\frac{d\alpha}{dx_1}$ and $\frac{d\beta}{dx_2}$ in (16), but only on the solution of the algebraic part in (14). Hence any choice of functions $\alpha$ and $\beta$ in (10) that satisfies (9) will lead to the same phenomena.

2.2.2. Non-tangential exit points. Suppose that the regularized solution enters the neighborhood of a non-tangential exit point $\bar x = (0, 0, \bar x_3)$. In this case, although there is only one solution of (14), different things can happen depending on the functions $\frac{d\alpha}{dx_1}$ and $\frac{d\beta}{dx_2}$ in (16). Suppose that $f_1$ is the vector field verifying the conditions for non-tangential exit. Then $(\alpha^*(x_3), \beta^*(x_3))$ is still the only equilibrium of (16) in the interior of $[0,1]^2$ and it is asymptotically stable. However, the equilibrium $(0,0)$ undergoes a bifurcation at $x_3 = \bar x_3$ and, for $x_3 > \bar x_3$, there is a neighborhood of $(0,0)$ in $Q$ such that all solutions in that neighborhood are attracted to $(0,0)$. As far as the regularized solution is concerned, (i) or (ii) below may occur.
(i) The regularized solution $x(t)$ might remain close to $\Sigma$, in agreement with Theorem 2.1. (ii) The regularized solution will leave a neighborhood of $\Sigma$ to enter $R_1$, if the corresponding solution of (16) enters the neighborhood of solutions that reach $(0,0)$. In other words, the behavior of the regularized solution depends on the possible bifurcations of $(\alpha^*, \beta^*)$ and of $(0,0)$. Moreover, we can choose the functions $\alpha$ and $\beta$ so that the terms $\frac{d\alpha}{dx_1}$ and $\frac{d\beta}{dx_2}$ will make either one of (i) or (ii) above take place. See Example 4.2 in Section 4 for an illustration of this fact.

2.2.3. Exit point in the case of spiral dynamics. We consider the case of $\Sigma$ spirally attractive and, based on the results in [7], we propose the following definition of potential spiral exit point.

Definition 2.3. Let $\bar x \in \Sigma$ and assume that there is spiral dynamics around $\Sigma$. We say that $\bar x$ is a potential spiral exit point if $\bar x$ is such that

Again, we are only concerned with first order phenomena, i.e., phenomena such that the derivative with respect to time of the quantity on the left-hand side in Definition 2.3, evaluated at $x(t) = \bar x$, is different from zero.
The equilibrium $(\alpha^*, \beta^*)$ of (16) is always unique and well defined in $(0,1)^2$ for all $x_3 \in \mathbb{R}$ as long as there is spiral dynamics around $\Sigma$. The attractivity of $\Sigma$, though, does not imply the stability of the equilibrium. On the other hand, $(\alpha^*, \beta^*)$ might be stable when $\Sigma$ is not attractive. As an illustration of both phenomena, see Example 4.3. Furthermore, when the equilibrium of (16) is unstable and the trajectories of (16) reach (possibly in finite time) one of the equilibria on the boundary of $Q$, it is the local dynamics around $\Sigma$ that determines the behavior of the regularized solution. The importance of the basin of attraction of $(\alpha^*, \beta^*)$ is well illustrated in Example 4.3, Figure 11, for $\epsilon_\beta = 10\,\epsilon_\alpha$. In light of the different possible behaviors of the regularized solution described above, it is natural to ask how the solution of the original discontinuous system (1) should behave once a potential first order exit point is reached, and how we should infer this. We discuss this aspect in the next section.
3. Numerical simulations for the discontinuous system. Frequently, Euler's method has been used as a means to approximate the behavior of solutions of (1) in a neighborhood of the discontinuity surface. In [15] (proof of Theorem 1, page 77), Filippov showed that the solutions obtained with Euler's method converge to one of the solutions of Filippov's inclusion when the discretization stepsize goes to zero. In [1], Alexander and Seidman, in the case of a nodally attractive co-dimension 2 discontinuity surface $\Sigma$, consider a chattering trajectory $x$ that evolves in an $\epsilon$-neighborhood of $\Sigma$. The trajectory $x$ is obtained by considering the Euler approximation of the solution, and the fact that $\Sigma$ is nodally attractive guarantees that, for a sufficiently small stepsize $\tau$, the Euler approximations remain in an $\epsilon$-neighborhood of $\Sigma$. Alexander and Seidman in [1] show that every possible Filippov sliding vector field on $\Sigma$ is realizable; i.e., given a solution $\bar x(t)$ of Filippov's differential inclusion, there exists a chattering trajectory $x$ that converges uniformly in $t$ to $\bar x$ in a given time interval.
3.1. Discretization. We also use Euler's method as a tool to understand how the solutions of (1) should behave in a neighborhood of the codimension 2 discontinuity surface Σ. However, we propose to do this in a new way, which enables us to make (statistical) inferences on the expected behavior of a trajectory.
First, we evolve by Euler's method several initial conditions in a neighborhood of $\Sigma$. Second, in order to mimic the non-ideal behavior of a physical system, at each step $k$ we use a random stepsize $\tau_k$ such that $\tau \le \tau_k \le 2\tau$, where $\tau$ is a fixed reference value; in practice we take $\tau_k$ uniformly distributed (though, in our experience, changing the distribution does not produce meaningful changes in the outcome). In case one of the computed approximations falls on a discontinuity surface, we take a random perturbation of size of machine precision, so that the perturbed value belongs to one of the regions $R_j$'s. Therefore, for each initial condition $x_0$, chosen in a $\tau$-neighborhood of $\Sigma$, we generate the approximate solution (20), where $\delta_k > 0$ is a (normally distributed) random perturbation of the same size as machine precision. We perform two types of experiments with the scheme (20): in the first, we evolve many different initial conditions once; in the second, we perform many different realizations for the same initial condition.

(a) We generate 100 (or more) uniformly distributed random initial conditions in a $\tau$-neighborhood of $\Sigma$, and evolve each of them according to (20) on a given time interval of interest. We further monitor, see below, exit points for each trajectory, and perform statistics on these.

(b) For a given uniformly distributed random initial condition in a $\tau$-neighborhood of $\Sigma$, we generate 100 trajectories according to (20), on a given time interval of interest. Of course, because of the randomness in the stepsize selection process, these will give approximations at different times. Thus, we further interpolate linearly the given trajectories on a fixed temporal grid with spacing $\tau$, that is at times $0, \tau, 2\tau, \dots$. Finally, we compute an average trajectory by averaging the obtained 100 approximations on the fixed grid. We also compute exit points, etc., with respect to this average trajectory.
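The stepping scheme (20) and the averaging procedure of point (b) can be sketched as follows (a minimal Python sketch, not the authors' code: the function `f`, assumed to evaluate the vector field of the region containing the current point, the helper names `random_euler` and `average_trajectory`, and the choice of toy parameters are our own assumptions):

```python
import numpy as np

def random_euler(f, x0, T, tau, rng):
    """One realization of the 'Random Euler' scheme: random stepsizes
    tau_k uniformly in [tau, 2*tau], and a machine-precision random kick
    whenever an iterate lands exactly on {x1 = 0} or {x2 = 0}."""
    x, t = np.array(x0, dtype=float), 0.0
    xs, ts = [x.copy()], [0.0]
    while t < T:
        tau_k = rng.uniform(tau, 2 * tau)   # uniformly distributed stepsize
        x = x + tau_k * f(x)                # Euler step with the current region's field
        for j in (0, 1):                    # perturb off the discontinuity surfaces
            if x[j] == 0.0:
                x[j] += np.finfo(float).eps * rng.standard_normal()
        t += tau_k
        xs.append(x.copy()); ts.append(t)
    return np.array(ts), np.array(xs)

def average_trajectory(f, x0, T, tau, n_runs=100, seed=0):
    """Experiment (b): average n_runs realizations, linearly interpolated
    onto the fixed temporal grid 0, tau, 2*tau, ..."""
    grid = np.arange(0.0, T, tau)
    runs = []
    for r in range(n_runs):
        ts, xs = random_euler(f, x0, T, tau, np.random.default_rng(seed + r))
        runs.append(np.column_stack([np.interp(grid, ts, xs[:, j])
                                     for j in range(xs.shape[1])]))
    return grid, np.mean(runs, axis=0)
```

In experiment (a) one would call `random_euler` once per random initial condition and collect exit statistics; `average_trajectory` mimics the interpolation and averaging used for experiment (b).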

3.2. Construction of the experiments.
As previously remarked, our experiments are all in $\mathbb{R}^3$ or $\mathbb{R}^4$, and with discontinuity surfaces given by $h_1(x) = x_1$ and $h_2(x) = x_2$. Moreover, we choose the vector fields so that:
• All but one of the $w^i_j$'s are constant and they are in absolute value less than 1. The non-constant $w^i_j$ is a function of the slow variables $x_3$ (and $x_4$), and it is chosen in such a way that $\Sigma$ changes from locally attractive in finite time to non-attractive;
• No vector field in the Filippov differential inclusion has equilibria (or more complicated invariant sets) on $\Sigma$.

3.3. Exits. An important aspect of our simulations will be to monitor whether an approximation $\{x_k\}_{k \in \mathbb{N}}$ of (1), computed as in (20), leaves a neighborhood of $\Sigma$ when $\Sigma$ loses attractivity. To perform this task, we reasoned as follows. (The choices below are appropriate for the vector fields we chose, see Section 3.2.)

(a) Exit on a codimension 1 surface [Tangential Exits]. Suppose that $\Sigma$ loses attractivity at a tangential exit point for which $f_{\Sigma_1^+}$ becomes tangent to $\Sigma$. We monitor the function $h_2$ and declare that the numerical solution leaves a neighborhood of $\Sigma$ by checking that $h_2(x_k) > 10\tau$. When this is the case, we define as "exit point" the value $\bar x = \frac{x_{\bar k} + x_{\bar k+1}}{2}$, where $\bar k$ is the first index for which $h_2(x_{\bar k}) > 1.5\tau$ together with $h_2(x_j) > \tau$ for all $j \ge \bar k$. Note that we do this both for the 100 trajectories generated by (20) with different initial conditions, as well as for the average trajectory.

(b) Exit in one of the $R_j$'s [Non-Tangential Exits]. Suppose, for example, that the non-tangential exit is into the region $R_1$. We observe that, at the non-tangential exit point, $f_1$ is tangent either to $\Sigma_1$ or to $\Sigma_2$, while pointing away from $\Sigma$. This implies the existence of repulsive sliding along $\Sigma_1$ or $\Sigma_2$ with sliding vector field $f_1$ at the non-tangential exit point. Hence, we still need to detect first the exit on a codimension 1 sliding surface, which we do as in (a).

(c) Exit from spiral case. As long as $\Sigma$ is spirally attractive, motion around $\Sigma$ repeatedly takes the trajectory inside all of the regions $R_1, \dots, R_4$. To decide if the trajectory leaves a neighborhood of $\Sigma$ (when the spiral motion ceases to be attractive), we monitor the 2-norm of the vector $(h_1, h_2)$ for the last computed numerical value in $R_4$, before the solution enters the neighboring region $R_2$ (clockwise motion around $\Sigma$) or $R_3$ (counterclockwise motion). We do this for the trajectories associated to different initial conditions, and declare an exit when the 2-norm of $(h_1, h_2)$ becomes strictly monotone increasing.
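The tests in points (a) and (c) can be sketched as follows (a hedged Python sketch under our own naming assumptions: `xs` is a computed trajectory, `h1` and `h2` the switching functions, `regions` a hypothetical helper returning the label of the region containing a point, and the short strictly-increasing run in `spiral_exit_index` is our practical proxy for "strictly monotone increasing"):

```python
import numpy as np

def tangential_exit_point(xs, h2, tau):
    """Point (a): declare an exit once h2(x_k) > 10*tau for some k; the
    exit point is (x_kbar + x_{kbar+1})/2, with kbar the first index such
    that h2(x_kbar) > 1.5*tau and h2(x_j) > tau for all j >= kbar."""
    h = np.array([h2(x) for x in xs])
    if not np.any(h > 10 * tau):
        return None                                   # no exit detected
    # indices from which h2 stays above tau for the rest of the trajectory
    stays_above = np.logical_and.accumulate((h > tau)[::-1])[::-1]
    candidates = np.nonzero((h > 1.5 * tau) & stays_above)[0]
    if candidates.size == 0:
        return None
    kbar = candidates[0]
    return 0.5 * (xs[kbar] + xs[min(kbar + 1, len(xs) - 1)])

def spiral_exit_index(xs, h1, h2, regions, window=3):
    """Point (c): record ||(h1, h2)|| at the last point in R4 before the
    trajectory enters the neighboring region; declare an exit when the
    recorded norms increase strictly `window` times in a row (our proxy)."""
    norms = []
    for k in range(len(xs) - 1):
        if regions(xs[k]) == 4 and regions(xs[k + 1]) != 4:
            norms.append(np.hypot(h1(xs[k]), h2(xs[k])))
            if len(norms) > window and all(norms[-i] > norms[-i - 1]
                                           for i in range(1, window + 1)):
                return k
    return None
```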
3.4. Summary and limitations. As we report in the next section, based upon our experiments with Euler's method with random stepsizes, we can summarize our findings as follows.
• All computed solutions remain in a $O(\tau)$-neighborhood of $\Sigma$ as long as $\Sigma$ is attractive. This confirms that, in an ideal system, sliding should be taking place along $\Sigma$.
• All computed solutions move away from $\Sigma$ when they reach a sufficiently small neighborhood of a potential exit point.
There are important caveats to our results.
• If the discontinuous dynamical system is the "idealization" of a known smooth system, the approach described above may be misleading if we want to understand the limiting behavior of the solutions of the original system. For example, if the original system is given by (10), as the regularization parameter goes to zero one may obtain a limiting behavior which differs from the behavior of the discontinuous system; for an example of this, see [21]. Still, as long as the sliding surface $\Sigma$ is attractive, one can predict that solutions of the bilinearly regularized system must remain in a neighborhood of $\Sigma$.
• A class of systems that does not fit our analysis is given by control systems with nonlinear controls, for which the Filippov convex combination would not contain all the neighboring values of the vector fields, and hence our Euler approximations of (1) are not sufficient to understand the behavior of solutions (see [26] for a general theory).

4) Unregularized integration.
This refers to the approximation of (1) computed (as a discontinuous system) with the built-in Matlab functions ode45 and/or ode23s and/or ode15s. The most noteworthy aspects confirmed by our numerical experiments are the following. The numerical solution computed with "Random Euler" remains close to $\Sigma$ as long as $\Sigma$ is locally attractive, and instead leaves any small neighborhood of $\Sigma$ when $\Sigma$ loses attractivity. On the other hand, the approximations computed by the "Regularized integration" might remain in a neighborhood of $\Sigma$ even when $\Sigma$ is not attractive, in agreement with the dynamics of (16). For the integration of the regularized problem, the stiff Matlab integrators often do not perform satisfactorily and their numerical solution remains in a neighborhood of $\Sigma$ even when the corresponding theoretical solution does not; instead, we found the numerical integration with ode45 more reliable. Also, "Deterministic Euler" is not a foolproof option, since it superimposes its own dynamics on the true dynamics of the underlying problem. Finally, "Unregularized integration" may simply fail when stringent values of the error tolerances are required, occasionally produces totally erroneous approximations, and cannot be taken as a generally trustworthy indicator of what should happen in our context.
For our experiments, we will adopt the simplifications already anticipated in Section 3: the examples are in $\mathbb{R}^3$ and $\mathbb{R}^4$, the discontinuity surfaces are $\Sigma_j = \{x \in \mathbb{R}^n : x_j = 0\}$, $j = 1, 2$, and the $w^i_j$'s are all constants except for one of them that depends on $x \in \Sigma$. Moreover, with the exception of Example 4.4, in all the examples below the Filippov vector field is uniquely defined. This is not a restriction for our purposes, since the behavior of solutions (whether or not for the regularized system) at potential exit points remains ambiguous, as we will see below.
All the numerical experiments are done on a 2014 MacBook Air, 1.4 GHz Intel Core i5.
$\Sigma$ is the $(x_3, x_4)$ plane and on it the sliding vector field is uniquely defined. The circle $\gamma = \{(x_3, x_4) : x_3^2 + x_4^2 = 2\}$ is a curve of potential tangential exit points on $\Sigma$. The region inside $\gamma$ is attractive upon sliding: it is attractive upon sliding along $\Sigma^{\pm}_{1,2}$ for $x_3^2 + x_4^2 \le 2 - \frac{0.9}{4}$, and it is attractive upon sliding along $\Sigma^{\pm}_1$ and $\Sigma^+_2$ for $2 - \frac{0.9}{4} < x_3^2 + x_4^2 < 2$ (see [11] for theoretical studies of an attractive co-dimension 2 surface $\Sigma$ when one of the $w^i_j$'s is zero). Outside $\gamma$, $f_{\Sigma_1^-}$ points away from $\Sigma$, so that $\Sigma$ is not attractive. All solutions with initial condition inside $\gamma$ will eventually meet $\gamma$. We expect a typical trajectory of (21) to leave $\Sigma$ when it reaches $\gamma$ and to start sliding along $\Sigma_1^-$ in the direction opposite to $\Sigma$. Below, we report on four experiments.

a) Random Euler. We performed our experiments as described in Section 3.1, points (a) and (b). Following (a), we computed the average exit point for an ensemble of 100 initial conditions with $x_1(0)$ and $x_2(0)$ uniformly and independently distributed in $[-\tau, \tau]^2$ and $x_3^2(0) + x_4^2(0) = 1.7$. We computed the exit point for every solution using the algorithm described in Section 3.3, point (a). For $\tau = 10^{-6}$, we obtained that the average exit point satisfies $x_3^2 + x_4^2 \simeq 2.0865$, with standard deviation $\sigma \simeq 0.0202$. We also computed the average trajectory as described in Section 3.1, point (b), with reference stepsize $\tau = 10^{-6}$. In the left plot in Figure 2, we plot the first (continuous line) and second (dashed line) components of the average trajectory as functions of $x_3^2 + x_4^2$. As we can see from the plot, $x_1$ and $x_2$ remain close to 0 up to $x_3^2 + x_4^2 \simeq 2.08$, suggesting that sliding motion is taking place along $\Sigma$. After that, $x_2$ starts decreasing while $x_1$ remains close to 0. This suggests that the computed average trajectory leaves $\Sigma$ at $x_3^2 + x_4^2 \simeq 2.08$ and starts sliding on $\Sigma_1^-$. All of this is in line with the behavior we expect for a typical trajectory of (21).
In Figure 3 we plot 100 trajectories (shaded region) of (21) computed with Random Euler and uniformly distributed stepsizes with $\tau = 10^{-6}$. The second component of each trajectory is plotted as a function of $x_3^2 + x_4^2$, while the bold line is the second component of the average trajectory. The average solution leaves $\Sigma$ for $x_3^2 + x_4^2 \simeq 2.08$, and all 100 computed solutions leave $\Sigma$ before $x_3^2 + x_4^2 = 2.125$. Although the experiments described above are performed with uniformly distributed random stepsizes, similar results were obtained for normally distributed stepsizes.

b) Regularized Integration. For the regularized vector field (10), we take $\alpha$ and $\beta$ as in (11). The corresponding fast system (16) has a unique asymptotically stable equilibrium in $(0,1)^2$, a stable node, up to $x_3^2 + x_4^2 = \bar\rho \simeq 2.225$. The curve $x_3^2 + x_4^2 = \bar\rho$ is a curve of saddle-node bifurcation values for the fast system, and for $x_3^2 + x_4^2 > \bar\rho$ there are no equilibria in $(0,1)^2$. (For this example, the behavior of (16) does not change by choosing different functions $\alpha$ and $\beta$ in (11), as long as they satisfy conditions (9); this is in line with Remark 5.) The behavior of the solution of the regularized system is well illustrated in the right plot in Figure 2. The solution of the regularized problem is computed with the Matlab function ode23s and RelTol=AbsTol=$10^{-9}$. The continuous line in the plot is the first component of the solution, while the dashed line is the second component. They are both plotted as functions of $z = x_3^2 + x_4^2$.

c) We also computed the solution of (21) with Deterministic Euler with fixed stepsize $\tau = 10^{-3}$ and initial condition $(10^{-4}, 10^{-4}, 0, \sqrt{1.7})$. First, we get rid of the transient, and then plot the first two components of the approximation in Figure 4.
None of the elements of the sequence generated by the forward Euler map is on $\Sigma_1$ or $\Sigma_2$, and hence the computed solution is always in one of the $R_j$'s and the selection of the vector field is straightforward. As is clear from the plot, we do not recover the dynamics of the original system. The periodic orbit in the $(x_1, x_2)$ plane is spurious and is generated by forward Euler's own dynamics. The periodic orbit survives past the curve of potential tangential exit points, and indeed the third and fourth components of the plotted solution are such that $1.5 \le x_3^2 + x_4^2 \le 30.817$. Notice also that the computed solution is trapped in a neighborhood of $\Sigma$ when $\Sigma$ is not attractive. Periodic motion persists also for smaller values of $\tau$, though the orbit shrinks to the origin as $\tau \to 0$.

Example 4.2 ($\Sigma$ loses attractivity through a non-tangential potential exit point).
The vector fields are given in (22), and $\Sigma$ is the $x_3$-axis, with uniquely defined sliding vector field $\dot x_3 = 1$. $\Sigma$ is attractive through sliding along $\Sigma_1^+$ and $\Sigma_2^+$ for $x_3 < 3$. At $x_3 = 3$, $f_1$ is tangent to $\Sigma_1$ and points away from $\Sigma$, so that the point $(0, 0, 3)$ is a potential non-tangential exit point. When $x_3 > 3$, the vector field $f_1$ points away from both $\Sigma_1$ and $\Sigma_2$, and we expect a solution of (1) to move away from $\Sigma$ and to enter $R_1$.

a) Random Euler. We performed the experiments described in Section 3.1. Following Section 3.1, point (a), we considered 100 initial conditions uniformly and independently distributed in $[-\tau, \tau]^2$. With reference stepsize $\tau = 10^{-4}$, the average "exit point" computed as in Section 3.3, point (b), is $\bar x_3 \simeq 2.9854$ with standard deviation $\sigma \simeq 0.0122$. With stepsize $\tau = 10^{-5}$, we obtained $\bar x_3 \simeq 2.9923$ and standard deviation $\sigma \simeq 0.0046$. We also computed the average trajectory with reference stepsize $\tau = 10^{-5}$ as described in Section 3.1, point (b). The first and second components of the solution are plotted in Figure 5 as functions of the third component. As we can see from the plot, $x_1$ and $x_2$ stay close to 0 up to $x_3 \simeq 2.9944$, then they both start decreasing, suggesting exit from $\Sigma$ into $R_1$ at $\bar x$ with third component $\bar x_3 \simeq 2.9944$. The obtained results suggest that the typical solution of (22) behaves qualitatively as we expect.

b) Regularized Integration. For the regularized vector field (10), we take $\alpha$ and $\beta$ as in (11), but we make different choices for the regularization parameters. We use two different parameter sets, namely $\epsilon_\alpha = \epsilon_\beta = 10^{-6}$ and $\epsilon_\alpha = 10^{-6}$, $\epsilon_\beta = 10^{-5}$. The corresponding fast systems (16) and (17) are not orbitally equivalent and, as a consequence, the behavior of the regularized solutions differs as well. We first consider the regularized system for $\epsilon_\alpha = \epsilon_\beta$. In this case, $(\alpha^*(x_3), \beta^*(x_3))$ is an asymptotically stable equilibrium of (16) in $(0,1)^2$ for $x_3 < \bar x_3 \simeq 3.3853$.
For $x_3 = \bar x_3$, the eigenvalues of the Jacobian matrix of (16) at the equilibrium cross the imaginary axis, and $(\alpha^*(\bar x_3), \beta^*(\bar x_3))$ undergoes a subcritical Hopf bifurcation. In Figure 6, on the left, we plot (with a star) the stable equilibrium of (16) for $x_3 = 3.38$ and the unstable periodic orbits for two different values of the parameter: $x_3 = 3.38$ and $x_3 = 3.37$. The orbit that corresponds to the parameter value $x_3 = 3.36$ is obtained by integrating system (16) backward in time and, as we can see from the plot, it reaches the boundary of the square in finite time. The plot suggests that there is no unstable periodic orbit inside $[0,1]^2$ for this parameter value. In Figure 6, we also plot some trajectories (in forward time) of (16) for the parameter value $x_3 = 3.2$. The dots in the plot represent the equilibria of the system, and the trajectories in the plot have initial conditions close to the equilibria. Following Theorem 2.1, if the initial condition of (12) is chosen so that the corresponding $(\alpha(x_3(0)), \beta(x_3(0)))$ is in the basin of attraction of the equilibrium $(\alpha^*(x_3(0)), \beta^*(x_3(0)))$ of (16), then b) in Theorem 2.1 follows, and the regularized solution remains in a neighborhood of $\Sigma$ as long as $(\alpha^*(x_3), \beta^*(x_3))$ is attractive. If instead the initial condition is not in the basin of attraction of the equilibrium $(\alpha^*(x_3), \beta^*(x_3))$, the corresponding solution might not remain in a neighborhood of $\Sigma$ (consider the case of $\Sigma$ non-attractive, for example). This is well illustrated in Figure 7, where, on the left, we depict the numerical solution of the regularized problem with initial condition $[10^{-6}, 10^{-6}, 3.2]$: this solution remains close to $\Sigma$ up to $x_3 \simeq 3.384$ and then enters $R_1$, in accordance with our singular perturbation analysis. In the right plot, the solution reaches $\Sigma$ upon sliding along $\Sigma_2^+$ but then, rather than sliding on $\Sigma$, it crosses it and enters $R_1$.
Most likely the crossing is due to the fact that the initial condition $[10^{-3}, 10^{-3}, 3.2]$ does not satisfy (iii) in Theorem 2.1.
If we study the fast dynamics using (18) instead of (16), then the equilibrium $(\alpha^*, \beta^*)$ still undergoes a subcritical Hopf bifurcation, but for a different parameter value, namely $x_3 = \bar x_3 \simeq 3.4476$. However, this is not in agreement with the behavior of the regularized solution, which instead exits $\Sigma$ before this parameter value, as can be observed in the left plot of Figure 7.
With the second set of parameter values, $\epsilon_\alpha = 10^{-6}$ and $\epsilon_\beta = 10^{-5}$, the dynamics of (17) is similar, but the bifurcation values are different. The equilibrium $(\alpha^*(x_3), \beta^*(x_3))$ of (17) is stable up to $x_3 = \bar x_3 \simeq 3.2296$, and at $x_3 = \bar x_3$ it undergoes a subcritical Hopf bifurcation. For $x_3 > \bar x_3$, all solutions with initial condition in $[0,1]^2$ reach the boundary of the square in finite time.
We computed the numerical solution of (10) with $\epsilon_\alpha = 10^{-6}$ and $\epsilon_\beta = 10^{-5}$ with ode45 and RelTol=AbsTol=$10^{-12}$. In Figure 8 we plot the first and second components of the solution as functions of $x_3$. The plot suggests that the numerical solution leaves $\Sigma$ to enter $R_1$ for $x_3 \simeq 3.238$, and this is in agreement with the singular perturbation analysis. The average stepsize used by the integrator is $O(10^{-5})$. We also used stiff integrators on the regularized problem with the same values of RelTol and AbsTol as for ode45, but they did not perform satisfactorily. Indeed, the numerical solution obtained with ode23s remains close to $\Sigma$ up to $x_3 \simeq 3.369$ (with an average stepsize $O(10^{-4})$), while the solution obtained with ode15s is even less accurate and exits at $x_3 \simeq 3.485$ (with an average stepsize $O(10^{-2})$). We integrated the problem also with lower values of $\epsilon_\beta$, while still keeping $\epsilon_\alpha = \epsilon_\beta/10$, so that the corresponding fast systems are all equivalent. The numerical approximations with the stiff integrators are even less reliable, and indeed the computed exit values increase as $\epsilon_\beta$ decreases.
$\Sigma$ is the $x_3$-axis, with uniquely defined Filippov sliding motion $\dot x_3 = 1$, and there is spiral-like dynamics around $\Sigma$. For $x_3 < 1$, $\Sigma$ is attractive in finite time while, for $x_3 > 1$, $\Sigma$ is not locally attractive.
a) Random Euler. Approximations computed with "Random Euler" move away from $\Sigma$ for $x_3 > 1$. For a statistical estimate of the exit point, we consider an ensemble of 100 initial conditions with the first two components uniformly (and independently) distributed in $[-\tau, \tau]^2$, as described in Section 3.1, point (a). For $\tau = 10^{-5}$, the mean value of the third component at the exit point is $\bar x_3 \simeq 0.94777$, with standard deviation 0.04297. For $\tau = 10^{-6}$, the mean value at the exit point is $\bar x_3 \simeq 0.97910$ and the standard deviation is 0.00529. We also computed the average trajectory using the algorithm in Section 3.1, point (b). In Figure 9 we show the average trajectory computed with $\tau = 10^{-5}$ and initial condition $(10^{-6}, 10^{-6}, 0.5)$.
In Figure 10, on the left, we plot one numerical solution of (23) obtained with Random Euler and average stepsize $\tau = 10^{-6}$. On the right of Figure 10 we plot the 2-norm of the vector $(h_1, h_2)$ (for us, this is the vector $(x_1, x_2)$) as a function of $x_3$. The full dot in the plot marks the estimated exit point.

b) Regularized Integration. Next, consider integrating the regularized vector field (10) with $\alpha$ and $\beta$ as in (11) and two different sets of parameters. We first consider $\epsilon_\alpha = \epsilon_\beta = 10^{-4}$. For these parameter values, the equilibrium $(\alpha^*, \beta^*)$ of the fast system (16) is stable up to $x_3 \simeq 1.41798$, when the eigenvalues of the Jacobian matrix at the equilibrium cross the imaginary axis. We compute the solution of the regularized system with initial condition $(10^{-4}, 10^{-4}, 0)$ in the time interval $[0, 2]$ (so that $0 \le x_3 \le 2$) with ode23s, ode15s and ode45; the solutions computed with the stiff integrators remain close to $\Sigma$, while the solution computed with ode45 moves away from $\Sigma$ for $x_3 \simeq 1.5$ (well after $\Sigma$ has lost attractivity), as is seen from the left plot of Figure 11. The average stepsize used by the stiff integrators is $O(10^{-2})$, while the one used by the explicit integrator is $O(10^{-4})$. We then consider a second set of parameters, $\epsilon_\alpha = 10^{-4}$ and $\epsilon_\beta = 10^{-3}$. The equilibrium $(\alpha^*, \beta^*)$ of the corresponding fast system (17) is unstable when $x_3 > 0$. On the right of Figure 11 we plot the approximation computed with ode23s (the approximation computed with ode45 behaves in a similar way). As we can see from the plot, as long as $\Sigma$ is attractive, the regularized solution is forced to oscillate around $\Sigma$ even if the equilibrium $(\alpha^*(x_3), \beta^*(x_3))$ is not stable.

c) Unregularized integration. The numerical solution obtained with the stiff Matlab integrator ode23s, with RelTol=AbsTol=$10^{-7}$, stays close to $\Sigma$ up to $x_3 = 1$ and then leaves $\Sigma$ to enter one of the $R_j$'s.

Example 4.4. $\Sigma$ is the $(x_3, x_4)$ plane, and the curve $\gamma$ (see Figure 13) divides $\Sigma$ in two regions. Outside $\gamma$, $\Sigma$ is attractive upon sliding along $\Sigma_1^+$ and $\Sigma_2^+$.
The vector field $f_{\Sigma_2^+}$ is tangent to $\Sigma$ at all points of $\gamma$; it points away from $\Sigma$ inside $\gamma$, and it points towards $\Sigma$ outside $\gamma$. Hence, $\Sigma$ is not attractive from inside $\gamma$. We would expect a typical solution of (24) to leave $\Sigma$ once it reaches $\gamma$. Note that for this example the Filippov sliding vector field on $\Sigma$ is not uniquely defined.
(a) Random Euler. The ambiguity of a Filippov sliding vector field is clearly reflected in the numerical solutions obtained with Random Euler. In Figure 13, on the left, we plot only the $x_3$ and $x_4$ components of 100 trajectories computed with reference stepsize $\tau = 10^{-4}$ and the same initial condition $(10^{-4}, 10^{-4}, 3, 0.5)$. On the right, we plot the $x_1$, $x_3$ and $x_4$ components of only 20 of the trajectories above. Notice that $\Sigma$ is attractive for $(x_3, x_4) = (3, 0.5)$, and at this point there is a family of Filippov vector fields on $\Sigma$, namely $f_F(x_3, x_4) = [0, 0, -x_3 + (\frac{337}{14} - \frac{103}{2}\lambda)x_4, x_4]^T$, with $\frac{13}{35} \le \lambda \le \frac{3}{7}$. The circle in both plots is the curve $\gamma$, and the initial condition is marked with a dot. Looking at both plots we notice: i) the computed solutions leave $\Sigma$ when they reach a point on $\gamma$ (the $x_1$ component in the right plot increases from $O(10^{-4})$ to $O(10^{-2})$ when the trajectory meets $\gamma$ coming from outside it); ii) when the trajectories are sufficiently far from $\Sigma$, Random Euler selects the vector fields (presently, only $f_3$ and $f_4$) so as to reproduce the behavior of a uniquely defined vector field. This compares well with what is expected in theory (a solution of (24), if it leaves $\Sigma$, starts sliding on $\Sigma_2^+$ with uniquely defined vector field $f_{\Sigma_2^+} = \frac{1}{2} f_3 + \frac{1}{2} f_4$). Moreover, looking at the zoom in Figure 14, we see that Random Euler does not reproduce the behavior of a unique vector field when the solutions are sufficiently close to $\Sigma$ (outside $\gamma$ in the plot), as a witness to the ambiguity of a Filippov vector field on $\Sigma$. Random Euler reproduces this ambiguity satisfactorily. For completeness, in Figure 15 we show the average trajectory, whose behavior suggests that the average solution is leaving $\Sigma$. Moreover, $x_1$ starts decreasing towards 0 when $(x_3 - 3)^2 + (x_4 - 3)^2$ becomes again greater than 4, and this compares well with the theoretical results.

b) Regularized Integration. Here, the computed approximations of the regularized system move away from $\Sigma$ once they reach $\gamma$.
We note that $\gamma$ is a curve of saddle-node bifurcation values for (16), for any parameter values $\epsilon_\alpha$ and/or $\epsilon_\beta$. On $\gamma$, $(\alpha^*, \beta^*) = (1, 0.5)$ is a double root of (15), while inside $\gamma$ there is no solution of (15).

5. Conclusions. In this work we have been interested in studying the behavior of solutions of piecewise smooth systems in the neighborhood of a co-dimension 2 discontinuity surface $\Sigma$, intersection of two co-dimension 1 discontinuity surfaces. It has long been accepted that if solution trajectories cannot leave $\Sigma$ ($\Sigma$ is attractive), some form of sliding motion on $\Sigma$ should be taking place. Precisely which sliding motion has been the subject of much investigation, but it has not been our concern in this paper. Our chief interest in this work has been trying to understand what should happen when $\Sigma$ loses attractivity (at generic first order exit points). To our knowledge, this type of study had not been carried out before. We took the point of view that the piecewise smooth system (1) was the only information at our disposal, and treated this (real life or mathematical) model with its own mathematical dignity. Naturally, if (1) arose as a simplified model for some other known differential system, then this original system should ultimately guide the search for appropriate dynamics near $\Sigma$, and it may well be that the dynamics of this "true" system are not matched by those of (1). However, the discontinuous model might still give indications on the behavior of the original system. As an example, solutions of the bilinearly regularized vector field remain in a neighborhood of $\Sigma$ when $\Sigma$ is attractive for the discontinuous system. To obtain information on the dynamics of (1), we used a simple Euler method with random steps, uniformly chosen with respect to a small reference stepsize. Our study unambiguously showed that: (i) when $\Sigma$ is attractive, solution trajectories remain near $\Sigma$ (thereby validating an idealized sliding motion on $\Sigma$); (ii) when $\Sigma$ loses attractivity, a typical solution trajectory leaves a neighborhood of $\Sigma$.
Several other possibilities have also been considered in this work: regularization techniques, the plain Euler method with fixed stepsize, and direct numerical integration of (1) with sophisticated off-the-shelf solvers for differential equations. None of these options satisfactorily resolved the dynamics of (1), and they often produced misleading behavior. Ultimately, this occurred because each of these choices either superimposed its own dynamics on those of (1) (as the Euler method and regularization techniques do, further producing different behaviors depending on how the regularization is made), or simply failed to produce reliable answers in too many cases (this was the case when directly solving (1) with existing software, where the outcome dramatically depended on the solver used, or on the tolerance values, or both).
The above notwithstanding, our conclusions are not fully satisfactory either. Our analysis tells us that it is the dynamics of (1) around $\Sigma$ that must be used to tell us what should happen in a neighborhood of $\Sigma$, but we know of no general foolproof means to regularize the system so that the regularized trajectory will follow the dynamics of (1). Perhaps, and again as long as the model (1) is appropriate, the most reliable and practically efficient way to proceed is to accept some form of idealized sliding motion on $\Sigma$ as long as $\Sigma$ is attractive, while also demanding that a sliding trajectory leaves $\Sigma$ when the latter loses its attractivity. The construction of appropriate sliding vector fields fulfilling these requirements remains an outstanding and challenging task.