EXISTENCE FOR NONLINEAR FINITE DIMENSIONAL STOCHASTIC DIFFERENTIAL EQUATIONS OF SUBGRADIENT TYPE

Abstract. One proves via variational techniques the existence and uniqueness of a strong solution to the stochastic differential equation dX + ∂ϕ(t, X) dt ∋ ∑_{i=1}^{N} σ_i(X) dβ_i, X(0) = x, where ∂ϕ : R^d → 2^{R^d} is the subdifferential of a convex function ϕ : R^d → R and σ_i ∈ L(R^d, R^d), 1 ≤ d < ∞.

The hypothesis below will be assumed in what follows.

(i) A(t, y) = ∂_y ϕ(t, y), ∀t ∈ [0, T], y ∈ R^d, where ϕ ∈ C([0, T] × R^d) and, for each t ∈ [0, T], y → ϕ(t, y) is a convex function on R^d.

Here ∂_y ϕ(t, y), simply denoted ∂ϕ(t, y) in the following, is the subgradient (subdifferential) of ϕ(t, ·) at y, that is, ∂ϕ(t, y) = {z ∈ R^d ; ϕ(t, y) ≤ ϕ(t, ȳ) + z · (y − ȳ), ∀ȳ ∈ R^d}. It should be said that the mapping y → A(t, y) is, in general, multivalued (this is the case if ϕ is not differentiable in y) and that stochastic differential equations of the form (1) arise, for instance, from discontinuous equations of the form
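Although ∂ϕ(t, y) is multivalued where ϕ fails to be differentiable, its resolvent (I + λ∂ϕ)^{−1} is always single-valued. The following minimal sketch illustrates this for the choice ϕ(y) = |y| (an illustrative example, not taken from the text), whose resolvent is the soft-thresholding map:

```python
import numpy as np

def prox_abs(x, lam):
    """Resolvent (I + lam*dphi)^{-1} for phi(y) = |y| (soft-thresholding).

    z in dphi(y) iff y = prox_abs(y + lam*z, lam); the resolvent is
    single-valued even though dphi(0) = [-1, 1] is a whole interval.
    """
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# every point with |x| <= lam is mapped to 0, reflecting dphi(0) = [-1, 1]
values = prox_abs(np.array([-2.0, -0.3, 0.0, 0.3, 2.0]), 0.5)
```

Here the resolvent collapses the multivalued graph at the kink of ϕ into a well-defined single point, which is the mechanism exploited throughout the existence theory.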

VIOREL BARBU
where A_0 : R^d → R^d is a monotone measurable function, that is, where B_δ(x) = {y ∈ R^d ; |x − y| ≤ δ} and m is the Lebesgue measure. Then A : R^d → 2^{R^d} is maximal monotone (see [1], p. 46) and so it satisfies hypothesis (i). The solution to (1) with A given by (4) is then taken as the solution to the discontinuous equation. We shall also use the standard notation L^p((0, T) × Ω; R^d) for the space of R^d-valued, L^p-integrable functions on the set (0, T) × Ω endowed with the measure dt × dP. Given a convex and lower-semicontinuous function ψ : R^d → R̄ = ]−∞, +∞], we denote by ψ* : R^d → R̄ the conjugate function ψ*(p) = sup{p · y − ψ(y) ; y ∈ R^d}. We note that (see, e.g., [7], p. 79) ψ(y) + ψ*(p) ≥ p · y for all y, p ∈ R^d, with equality if and only if p ∈ ∂ψ(y). We refer also to [10] for basic results of stochastic analysis to be used later in this work.
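The conjugate ψ* and the Fenchel–Young inequality can be checked numerically. Below is a sketch with the illustrative choice ψ(y) = y²/2, for which ψ* = ψ; the grid-based supremum is an assumption of the sketch, not a construction from the text:

```python
import numpy as np

def conjugate(psi, p, grid):
    """Numerical Legendre conjugate psi*(p) = sup_y (p*y - psi(y)) over a grid."""
    return np.max(p * grid - psi(grid))

grid = np.linspace(-10.0, 10.0, 20001)
psi = lambda y: 0.5 * y**2          # for this psi, psi*(p) = p^2/2

for p in (-1.0, 0.0, 2.0):
    approx = conjugate(psi, p, grid)
    # Fenchel-Young: psi(y) + psi*(p) >= p*y, with equality at p = psi'(y)
    assert abs(approx - 0.5 * p**2) < 1e-6
```

The supremum is attained at y = p, where the Fenchel–Young inequality becomes an equality, mirroring the characterization p ∈ ∂ψ(y) used in the variational argument below.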
where I is the identity d × d matrix.
We have

Theorem 2.2. Assume that hypothesis (i) holds. Then, for each x ∈ R^d, there is a unique strong solution X to (1). Moreover, one has

Example 1. Consider the second order stochastic differential equation or, more exactly, where β is a Brownian motion. Here f : [0, T] × R → R is continuous in t and monotonically nondecreasing in the second argument, with a discontinuity at r = r_0. By filling the jump at r_0, that is, by replacing the function f by its filled-in multivalued version and substituting in (17) A_0 by the corresponding mapping A defined by (4), we rewrite the above system in the form (1), where A satisfies hypothesis (i). (By redefining the scalar product of R^1 × R^1, we see that A is of gradient type.) Equation (15) describes the motion of a particle subject to a linear deterministic forcing ω^2 X and to a nonlinear, time-dependent discontinuous friction force f(t, Ẋ).
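The oscillator of Example 1 can be simulated with a semi-implicit Euler–Maruyama scheme in which the discontinuous friction is handled through the resolvent of its filled-in graph. The sketch below is only an illustration: the friction f(r) = μ sign(r) (jump at r_0 = 0) and all parameter values (ω, σ, μ, T) are assumptions of the sketch, not data from the text.

```python
import numpy as np

def simulate(x0, v0, omega=1.0, sigma=0.2, mu=0.5, T=1.0, n=2000, seed=0):
    """Semi-implicit Euler-Maruyama sketch for X'' + f(X') + omega^2 X = sigma*beta'
    with f(r) = mu*sign(r), a friction discontinuous at r0 = 0.

    The multivalued filled-in jump is treated by an implicit (resolvent)
    step in the velocity, which for mu*sign(.) is soft-thresholding.
    """
    rng = np.random.default_rng(seed)
    dt = T / n
    x, v = float(x0), float(v0)
    for _ in range(n):
        dbeta = rng.normal(0.0, np.sqrt(dt))
        # explicit part: linear forcing and noise act on the velocity
        w = v - omega**2 * x * dt + sigma * dbeta
        # implicit part: resolvent of the filled-in friction graph
        v = float(np.sign(w)) * max(abs(w) - mu * dt, 0.0)
        x += v * dt
    return x, v

x, v = simulate(1.0, 0.0)
```

The implicit velocity step keeps the scheme well defined even when the explicit update lands exactly on the jump, which is precisely the role of the filled-in mapping A in (4).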
3. Proof of Theorem 2.2. Without loss of generality, we may prove the theorem for the modified equation where λ > 0 is arbitrary. Indeed, equation (1) can be reduced to (18) by the substitution X → e^{λt} X and by replacing A(t, X) by e^{−λt} A(t, e^{λt} X). We note that hypothesis (i) is invariant under such a substitution.
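The mechanism of the substitution can be checked symbolically: reading it as Y = e^{−λt} X, one has Y′ + λY = e^{−λt} X′, so the added drift term λY in (18) exactly absorbs the exponential factor, while A(t, X) becomes e^{−λt} A(t, e^{λt} Y). A minimal symbolic sketch of this identity:

```python
import sympy as sp

t, lam = sp.symbols("t lambda", positive=True)
X = sp.Function("X")

# the substitution X -> e^{lambda t} X, read as Y = e^{-lambda t} X
Y = sp.exp(-lam * t) * X(t)

# Y' + lambda*Y collapses to e^{-lambda t} X', so the lambda-term of (18)
# cancels the derivative of the exponential factor
lhs = sp.diff(Y, t) + lam * Y
assert sp.simplify(lhs - sp.exp(-lam * t) * sp.diff(X(t), t)) == 0
```

Since σ_i is linear, e^{−λt} σ_i(e^{λt} Y) = σ_i(Y), so the noise term is unchanged and the transformed drift e^{−λt} A(t, e^{λt} y) is again the subgradient of a convex function of y, which is the invariance of hypothesis (i) invoked above.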
Consider now the transformation X = Γ(t)y, where Γ(t) is the solution to (12). Then, by Itô's formula, we obtain for y = Γ^{−1}(X) the random differential equation Equivalently, where Since the mapping y → Γ(t)^{−1} A(t, Γ(t)y) is not monotone, the existence for (20) does not follow from general existence results for the deterministic Cauchy problem in R^d. We have, however, the following.

Proof. We shall associate with (21) the following optimal control problem:

Minimize I(y, z) = E ∫_0^T (ψ(t, Γ(t)y(t)) + ψ*(t, z(t))) dt + (1/2) E|Γ(T)y(T)|^2 subject to (y, z) ∈ U.

Remark 1.
A nice feature of the variational approach used above is that it reduces the nonlinear stochastic differential equation (1) to a convex optimization problem, which can be studied and, eventually, approximated within the framework of convex analysis. A similar method was used in the author's works [3]–[6], [8] on the existence theory of nonlinear stochastic partial differential equations.
It should also be mentioned that there is a large number of works devoted to multivalued equations of the form (1), formulated as stochastic variational inequalities; a notable recent contribution is the work [9] of R. Buckdahn et al. The results of the present work do not entirely cover those of [9], which refers to stochastic variational inequalities on a nonconvex domain O ⊂ R^d. However, the case where O is closed, convex and with nonempty interior can be treated as in Theorem 2.2 by redefining the function ϕ as ϕ(t, y) = ϕ(y) for y ∈ O, ϕ(t, y) = +∞ for y ∉ O. Since the new function ϕ has domain D(ϕ(t)) with nonempty interior, one can proceed by reducing, as above, the existence problem to a convex minimization problem.
The proof is exactly the same and will be omitted.
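Redefining ϕ as +∞ outside a closed convex set O turns the resolvent of ∂ϕ into the metric projection onto O, which is how the variational inequality constraint enters the scheme. A minimal sketch for a box-shaped O (an illustrative choice of convex set, not one from the text):

```python
import numpy as np

def prox_indicator_box(x, lo, hi):
    """Resolvent (I + lam*dphi)^{-1} for phi = indicator of the box [lo, hi]^d.

    For an indicator function the resolvent is the projection onto O,
    independently of the step lam, so constrained steps reduce to clipping.
    """
    return np.clip(x, lo, hi)

projected = prox_indicator_box(np.array([-2.0, 0.5, 3.0]), 0.0, 1.0)
```

Points already in O are left fixed, while exterior points are mapped to the boundary, which is the reflection mechanism of the associated stochastic variational inequality.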
It should also be said that the extension of Theorem 2.2 to general mappings A : [0, T] × R^d → 2^{R^d} which are continuous in t and maximal monotone with respect to y remains open.