THE INTERIOR GRADIENT ESTIMATE FOR SOME NONLINEAR CURVATURE EQUATIONS

Abstract. In this paper, we obtain interior gradient estimates for some nonlinear equations arising naturally from the prescribed curvature problem for graphs in hyperbolic space. The method depends on the maximum principle.

This equation arises naturally from the prescribed curvature problem for graphs in hyperbolic space; see for example [10, 11] and the references therein. To study Weingarten hypersurfaces of prescribed curvature in hyperbolic space $\mathbb{H}^{n+1}$, we seek a complete hypersurface $\Sigma \subset \mathbb{H}^{n+1}$ satisfying
\[
f(\kappa[\Sigma]) = \psi,
\]
where $f$ is a smooth symmetric function of $n$ variables and $\kappa[\Sigma] := (\kappa_1, \dots, \kappa_n)$ denotes the induced hyperbolic principal curvatures of $\Sigma$. For convenience, we use the half-space model
\[
\mathbb{H}^{n+1} = \{(x, x_{n+1}) \in \mathbb{R}^{n+1} : x_{n+1} > 0\},
\qquad
ds^2 = \frac{1}{x_{n+1}^2} \sum_{i=1}^{n+1} dx_i^2.
\]
Suppose that $\Sigma$ can locally be represented as the graph of a function $u \in C^2(\Omega)$, $u > 0$, over a domain $\Omega \subset \mathbb{R}^n$:
\[
\Sigma = \{(x, u(x)) : x \in \Omega\}.
\]
Taking $\nu$ to be the upward unit normal vector field to $\Sigma$,
\[
\nu = \frac{(-Du, 1)}{w}, \qquad w := \sqrt{1 + |Du|^2},
\]
the induced Euclidean metric and second fundamental form of $\Sigma$ are given by
\[
g_{ij} = \delta_{ij} + u_i u_j, \qquad h_{ij} = \frac{u_{ij}}{w},
\]
and the inverse of $(g_{ij})$ and its square root can be represented as
\[
g^{ij} = \delta_{ij} - \frac{u_i u_j}{w^2}, \qquad \gamma^{ij} = \delta_{ij} - \frac{u_i u_j}{w(1+w)}.
\]
The Euclidean principal curvatures $\kappa^0[\Sigma]$ are the eigenvalues of the symmetric matrix $\frac{1}{w}\,\gamma^{ik} u_{kl} \gamma^{lj}$. Using the relation between the hyperbolic and Euclidean principal curvatures (see Section 2 in [10] or [11]),
\[
\kappa_i = u\,\kappa^0_i + \frac{1}{w},
\]
the hyperbolic principal curvatures $\kappa[\Sigma]$ are the eigenvalues of the matrix $A = (a_{ij})$ with
\[
a_{ij} := \frac{u}{w}\,\gamma^{ik} u_{kl} \gamma^{lj} + \frac{1}{w}\,\delta_{ij}.
\]
For convenience, we rewrite the above as
\[
a_{ij} = \frac{u}{w}\Big( u_{ij} - \frac{u_i u_k u_{kj} + u_j u_k u_{ki}}{w(1+w)} + \frac{u_i u_k u_{kl} u_l u_j}{w^2(1+w)^2} \Big) + \frac{1}{w}\,\delta_{ij}.
\]

The interior gradient estimate for higher-order prescribed curvature equations in Euclidean space was obtained by Korevaar [12] for Weingarten equations, and by Li [15] and Trudinger [19] for general nonlinear elliptic equations of prescribed curvature. Later, Wang gave a new proof in [20] using the maximum principle, where he constructed a celebrated auxiliary function. In this paper, we adapt the method used in [5] and [20] to obtain the interior gradient estimate for equation (1.1); here, however, more terms are involved and must be handled with care.
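As a quick numerical sanity check of the graph curvature formulas above (an illustrative sketch, not part of the argument): a Euclidean hemisphere centered on the boundary $\{x_{n+1} = 0\}$ is totally geodesic in the half-space model, so for $u(x) = \sqrt{R^2 - |x|^2}$ the Euclidean principal curvatures should equal $-1/R$ and the hyperbolic principal curvatures should vanish.

```python
import numpy as np

# Graph u(x) = sqrt(R^2 - |x|^2): a Euclidean hemisphere centered on the
# boundary {x_{n+1} = 0}, which is totally geodesic in hyperbolic space.
R, x = 2.0, np.array([0.3, 0.4])
u = np.sqrt(R**2 - x @ x)
Du = -x / u                                     # gradient of u
Hess = -np.eye(2) / u - np.outer(x, x) / u**3   # Hessian of u
w = np.sqrt(1 + Du @ Du)                        # here w = R / u

# gamma^{ij} = delta_ij - u_i u_j / (w(1+w)), square root of (g^{ij})
gamma = np.eye(2) - np.outer(Du, Du) / (w * (1 + w))
A_E = gamma @ Hess @ gamma / w                  # Euclidean curvature matrix
A_H = u * A_E + np.eye(2) / w                   # hyperbolic curvature matrix

kappa_E = np.linalg.eigvalsh(A_E)               # expected: both -1/R
kappa_H = np.linalg.eigvalsh(A_H)               # expected: both 0
print(kappa_E, kappa_H)
```

Since $w = R/u$ on the hemisphere, the two contributions $u\kappa^0_i = -u/R$ and $1/w = u/R$ cancel exactly, matching the relation $\kappa_i = u\kappa^0_i + 1/w$.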
Related parabolic interior gradient estimates for curvature equations have been obtained in [9, 7] for the mean curvature flow of graphs and in [6] for the anisotropic mean curvature flow of a graph-like hypersurface. Later, interior gradient estimates were obtained in [1] for quasilinear parabolic equations with gradient-dependent coefficients, which include both isotropic and anisotropic flows, and in [16] for a modified mean curvature flow (MMCF) in hyperbolic space.
We recall the notion of an admissible solution.
We are now ready to state our main theorem, where $M := \sup u$ and $C$ is a positive constant depending only on $n$, $k$ and $C_0$.
In particular, when $k = 1$, equation (1.5) is just the hyperbolic mean curvature equation (1.8), where
\[
a_{ij} := \delta_{ij} - \frac{u_i u_j}{1 + |\nabla u|^2}.
\]
We then immediately have the following corollary.

Corollary 1.3. For any $0 < \varepsilon < 1$, let $u \in C^3(B_R(0))$ be a solution of the mean curvature equation (1.8). Then the interior gradient estimate of Theorem 1.2 holds, where $M := \sup u$ and $C$ is a positive constant depending only on $n$, $k$ and $C_0$.
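The coefficient $a_{ij}$ appears because $\gamma$ is a square root of the inverse metric: $\gamma^2 = (g^{ij}) = (\delta_{ij} - u_i u_j/(1+|\nabla u|^2))$, so the trace of the curvature matrix satisfies $\operatorname{tr}(\gamma\, D^2u\, \gamma) = a_{ij} u_{ij}$. A quick numerical check of this algebraic identity (illustrative only, with a randomly chosen gradient and Hessian):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
Du = rng.normal(size=n)                                      # stand-in gradient
Hess = rng.normal(size=(n, n)); Hess = (Hess + Hess.T) / 2   # symmetric Hessian
w = np.sqrt(1 + Du @ Du)

gamma = np.eye(n) - np.outer(Du, Du) / (w * (1 + w))         # sqrt of (g^{ij})
a = np.eye(n) - np.outer(Du, Du) / (1 + Du @ Du)             # a_ij in (1.8)

# tr(gamma H gamma) = tr(gamma^2 H) = tr(a H), since gamma^2 = (g^{ij}) = a
lhs = np.trace(gamma @ Hess @ gamma)
rhs = np.trace(a @ Hess)
print(abs(lhs - rhs))
```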
The assumption $u \ge \varepsilon$ is necessary in the sense that equation (1.4) is degenerate at $u = 0$. The same idea was used in [11] and [16], where the corresponding approximate problem is considered for a fixed, sufficiently small $\varepsilon > 0$.
Remark 1.5. The gradient estimate for the minimal surface equation (i.e. $H \equiv 0$ in (1.8)) was obtained in [18], where it is the key step in proving the existence of solutions to the Dirichlet boundary value problem on bounded planar convex domains. In [16] (Section 9), similar interior gradient estimates were obtained for the MMCF, which can be seen as the parabolic version of Corollary 1.3. For further related results, see [16, 18] and the references therein.
The rest of the paper is organized as follows. In Section 2, we collect some basic properties of $\sigma_k$ which will be used to prove the interior gradient estimate. In Section 3, we adapt the idea used in [5] and [20] to prove Theorem 1.2.

2. Preliminaries.
In this section, we collect some basic facts about the $k$-th elementary symmetric function which will be used in the proof of Theorem 1.2.
We denote by $\sigma_l(\lambda|i)$ the symmetric function $\sigma_l(\lambda)$ with $\lambda_i = 0$, and by $\sigma_l(\lambda|ij)$ the symmetric function $\sigma_l(\lambda)$ with $\lambda_i = \lambda_j = 0$. Similarly, we denote by $\sigma_l(A|i)$ the symmetric function of $A$ with the $i$-th row and $i$-th column deleted, and by $\sigma_l(A|ij)$ that with the $i$-th and $j$-th rows and columns deleted, for all $1 \le i, j, l \le n$.
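For concreteness, $\sigma_l(\lambda)$ and $\sigma_l(\lambda|i)$ can be computed directly, and two standard facts used throughout — the expansion $\sigma_k(\lambda) = \sigma_k(\lambda|i) + \lambda_i\,\sigma_{k-1}(\lambda|i)$ and the Maclaurin inequality on the positive cone — checked numerically (an illustrative sketch; the sample values of $\lambda$ are arbitrary):

```python
import math
from itertools import combinations

def sigma(k, lam):
    # k-th elementary symmetric function sigma_k(lambda); sigma_0 := 1
    return sum(math.prod(c) for c in combinations(lam, k)) if k > 0 else 1.0

def sigma_cut(k, lam, i):
    # sigma_k(lambda | i): sigma_k with the variable lambda_i removed (set to 0)
    return sigma(k, lam[:i] + lam[i + 1:])

lam = [3.0, 1.0, -0.5, 2.0]
k, i = 2, 1
# standard expansion: sigma_k(lam) = sigma_k(lam|i) + lam_i * sigma_{k-1}(lam|i)
lhs = sigma(k, lam)
rhs = sigma_cut(k, lam, i) + lam[i] * sigma_cut(k - 1, lam, i)
print(lhs, rhs)  # both equal 8.0

# Maclaurin inequality on the positive cone: the normalized means
# (sigma_k / C(n,k))^{1/k} are non-increasing in k
pos = [0.3, 1.7, 2.0, 0.9, 4.1]
n = len(pos)
means = [(sigma(k, pos) / math.comb(n, k)) ** (1.0 / k) for k in range(1, n + 1)]
assert all(means[j] >= means[j + 1] - 1e-12 for j in range(n - 1))
```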
Proof. See [14].

Proof. The first and second inequalities can be found in [15], and the last inequality follows from them.

3. Proof of Theorem 1.2.

Proof. Consider the auxiliary function
\[
G(x) := \log\log u_1(x) + \log \varphi(u) + \log g(x), \qquad x \in B_R(0), \tag{3.3}
\]
which attains a local maximum at a point $x_0 \in B_R(0)$. All calculations below are carried out at $x_0$. In the following, we may assume that $u_1(x_0) \ge 4$; otherwise the proof is complete.
At $x_0$, the first derivative of the auxiliary function vanishes, which gives (3.4). Since $u_j(x_0) = 0$ for all $j \ge 2$, we then obtain (3.5). Taking the second derivative of the auxiliary function and using equality (3.4), we obtain (3.6). Since $u$ is admissible, $F$ is elliptic, and substituting the above equality into (3.6) yields (3.8), where the linearized operator has coefficients $F^{ij}$ as in (3.10). We denote $\mathcal{F} := \sum_{i=1}^n F^{ii}$ below.
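For the reader's convenience, the first-order condition at $x_0$ can be written out explicitly (a sketch consistent with (3.3); the precise forms of $\varphi$ and $g$ are those fixed above):

```latex
0 = G_i(x_0)
  = \frac{u_{1i}}{u_1 \log u_1}
  + \frac{\varphi'(u)}{\varphi(u)}\, u_i
  + \frac{g_i}{g},
  \qquad 1 \le i \le n.
```

Taking $i = 1$ and using $u_j(x_0) = 0$ for $j \ge 2$, this gives

```latex
\frac{u_{11}}{u_1 \log u_1}
  = -\frac{\varphi'(u)}{\varphi(u)}\, u_1 - \frac{g_1}{g},
\qquad
u_{1j} = -\,\frac{g_j}{g}\, u_1 \log u_1 \quad (j \ge 2).
```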
First we deal with the term I; observe (3.11). Using Proposition 2.1, and applying (3.5) in the last equality, we arrive at (3.13). Next we handle the term II in (3.8); see (3.14). Differentiating equation (1.5) once and computing directly, we evaluate the resulting terms at $x = x_0$. Adding the two estimates together, we obtain a lower bound for $F^{ij}G_{ij}$. We may assume that
\[
\frac{\varphi'}{2\varphi}\, u_1 \ge \Big| \frac{g_1}{g} \Big|;
\]
otherwise the proof is complete, since in that case the desired gradient bound follows directly. Under this assumption, the $g$-terms can be absorbed and the estimate simplifies. If the last inequality above does not hold, that is, if (3.26) holds, then the desired bound follows and the proof is complete.