First-order partial differential equations and consumer theory

In this paper, we show that the existence of a global solution of a standard first-order partial differential equation can be reduced to the extendability of the solution of the corresponding ordinary differential equation under differentiability and local Lipschitz conditions. Using this result, we can reproduce several known existence theorems for partial differential equations. Moreover, we demonstrate that such a result can be applied to the integrability problem in consumer theory. This result holds even if the differentiability condition is dropped.

In this paper, we consider the following first-order partial differential equation:

$Du(q) = f(q, u(q)), \quad u(p) = m.$

We show that the existence problem of a global solution of this equation can be reduced to the corresponding extension problem for a solution of some parametrized ordinary differential equation (ODE) (Theorems 1, 2). Using this result, we can easily reproduce the results of [6], [1], and [4] (Corollaries 1, 2, 3).
Further, these results have an application in economics called the integrability problem. Consider the following problem:

$\max v(x)$ subject to $x \ge 0,\ p \cdot x \le m,$

and let $f(p, m)$ be the unique solution of this problem. The integrability problem concerns methods for recovering the information of $v$ from $f$. In consumer theory, $f$ is called the demand function, and represents the purchase behavior of the consumer. On the other hand, $v$ is called a utility function, and represents the preference of the consumer. Thus, this problem asks how to reveal the consumer's preference from his or her purchase behavior. We discuss the relationship between the dual of the above problem and the standard partial differential equation (Theorem 3), and show that when a global solution exists, the information of $v$ can be recovered from $f$ (Theorems 4, 5, Corollary 4). Many results in this paper are derived under the differentiability and local Lipschitz assumptions on $f$. However, even if $f$ is not differentiable, Theorems 1, 4, and 5 are applicable. We demonstrate such a case (Examples 1, 2).
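As a concrete illustration of these objects, the following sketch computes a demand function in a case where it is known in closed form. The Cobb-Douglas utility below is our own illustrative assumption, not an example from this paper:

```python
# Hypothetical illustration: for the Cobb-Douglas utility
# v(x) = x1^a * x2^(1-a) with 0 < a < 1, the maximizer of v(x)
# subject to x >= 0 and p.x <= m is known in closed form, so the
# demand function f(p, m) can be written down directly.

def cobb_douglas_demand(p, m, a=0.5):
    """Demand function of v(x) = x1^a * x2^(1-a) (illustrative)."""
    return (a * m / p[0], (1 - a) * m / p[1])

# At prices (2, 4) and income 8 with a = 0.5, the consumer buys (2, 1).
x = cobb_douglas_demand((2.0, 4.0), 8.0)
```

Here the consumer spends the fixed budget shares $a$ and $1 - a$ on the two goods, so Walras' law $p \cdot f(p, m) = m$ holds automatically.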
In [3], Theorem 2 was shown under the continuous differentiability of $f$. In this paper, continuous differentiability is weakened to differentiability together with a local Lipschitz condition. Therefore, this theorem and its corollaries are new.
In section 2, we recall some basic results on ODEs used in this paper. In section 3, we present our basic equivalence theorem, and show several applications. In section 4, we discuss the application of this theorem to economics. All results in section 2 are proved in the appendix.
2. Preliminaries: notation and results for ODEs. Throughout this paper, we use the following notation. Let $\mathbb{R}^n$ denote the $n$-dimensional Euclidean space, and let $x \in \mathbb{R}^n$. Then, $x_i$ denotes the $i$-th coordinate of $x$. Meanwhile, let $f : P \to \mathbb{R}^n$ be given. If $x = f(p)$, then $x_i$ is also denoted by $f_i(p)$. Next, for any $x, y \in \mathbb{R}^n$, $x \ge y$ means $x_i \ge y_i$ for all $i$, and $x \gg y$ means $x_i > y_i$ for all $i$. $\mathbb{R}^n_{++}$ denotes the set of all $x \in \mathbb{R}^n$ such that $x \gg 0$, and is called the positive orthant. Meanwhile, $\mathbb{R}^n_+$ denotes the set of all $x \in \mathbb{R}^n$ such that $x \ge 0$, and is called the nonnegative orthant.
We now mention several basic results on solutions of ODEs. Our setting for the ODEs differs from that in standard textbooks, and thus we attach the proofs of all results in the appendix. Note that results similar to those in this subsection have been derived in many famous textbooks. First, consider the following ODE:

$\dot{x} = f(t, x), \quad x(t_0) = x_0,$

where $\dot{x}$ denotes the derivative of $x$ with respect to $t$. A solution of this equation is a $C^1$-class function $x$ defined on an interval $I$ containing $t_0$ such that $x(t_0) = x_0$ and $\dot{x}(t) = f(t, x(t))$ for any $t \in I$. We assume that $f : P \to \mathbb{R}^n$, where $P \subset \mathbb{R} \times \mathbb{R}^n$ is open and $(t_0, x_0) \in P$, and that $f$ is continuous in $(t, x)$ and locally Lipschitz in $x$: that is, for every compact set $C \subset P$, there exists $L > 0$ such that if $(t, x_1), (t, x_2) \in C$, then $\|f(t, x_1) - f(t, x_2)\| \le L\|x_1 - x_2\|$. Then, the following facts hold.
FACT 1. There exists a solution of the above equation defined on some open interval around $t_0$.

FACT 2. If $x^1 : I_1 \to \mathbb{R}^n$ and $x^2 : I_2 \to \mathbb{R}^n$ are solutions of the above equation, then $x^1(t) = x^2(t)$ for every $t \in I_1 \cap I_2$.

FACT 3. There uniquely exists a nonextendable solution $x^* : ]a, b[ \to \mathbb{R}^n$ of the above equation: that is, every solution is a restriction of $x^*$. Moreover, if $b < +\infty$, then for every compact set $C \subset P$, there exists $\varepsilon > 0$ such that $(t, x^*(t)) \notin C$ whenever $t > b - \varepsilon$; the symmetric statement holds at $a$.
Second, consider the following parametrized ODE:

$\dot{x} = f(t, x, y), \quad x(t_0) = x_0,$

where $y$ is a parameter. We assume that $f : \tilde{P} \to \mathbb{R}^n$, where $\tilde{P} \subset \mathbb{R} \times \mathbb{R}^n \times \mathbb{R}^m$ is open, and that $f$ is continuous in $(t, x, y)$ and locally Lipschitz in $x$. Then, the following fact holds.
FACT 4. There uniquely exists a function $x(t; y)$ defined on some open set in $\mathbb{R} \times \mathbb{R}^m$ such that, if $y$ is fixed, then $x(t; y)$ is a nonextendable solution of the above problem. Moreover, $x$ is continuous in $(t, y)$.
We also call this function x(t; y) a nonextendable solution. Then, the following fact holds.
FACT 5. If $f$ is continuous in $(t, x, y)$ and continuously differentiable in $(x, y)$, then the function $x(t; y)$ is also continuously differentiable in $y$; moreover, $\frac{\partial^2 x}{\partial t\,\partial y_i}(t; y)$ can be defined and is equal to $\frac{\partial^2 x}{\partial y_i\,\partial t}(t; y)$.
3. Basic result. Let $n \ge 2$, $P$ be an open subset of $\mathbb{R}^n \times \mathbb{R}$, $\Omega$ be a subset of $\mathbb{R}^n$, $(p, m) \in P$, and $f : P \to \Omega$ be continuous. Moreover, suppose that $f$ is locally Lipschitz in the last variable: that is, for any compact subset $Q \subset P$, there exists a constant $L > 0$ such that for every $(q, w_1), (q, w_2) \in Q$, $\|f(q, w_1) - f(q, w_2)\| \le L|w_1 - w_2|$. If $f$ is differentiable at $(q, w)$, define

$s_{ij}(q, w) = \frac{\partial f_i}{\partial q_j}(q, w) + f_j(q, w)\frac{\partial f_i}{\partial w}(q, w).$

We say that $f$ is integrable at $(q, w)$ if and only if $s_{ij}(q, w) = s_{ji}(q, w)$ for all $i, j$. If $f$ is integrable at every point in $P$, then we say that $f$ is integrable. Then, the following theorem holds.
Theorem 1. Let $U$ be a convex set in $\mathbb{R}^n$ with nonempty interior, $p \in \operatorname{int} U$, and $(p, m) \in P$. Then, there exists a continuously differentiable solution $u$ of the following PDE:

(1) $\quad Du(q) = f(q, u(q)), \quad u(p) = m,$

defined on $U$ only if, for any $q \in U$, there exists a solution $c(t; q)$ of the following ODE:

(2) $\quad \dot{c}(t) = (q - p)\cdot f((1 - t)p + tq, c(t)), \quad c(0) = m,$

defined on $[0, 1]$. If such a solution $u : U \to \mathbb{R}$ exists, then it is unique, and $u(q) = c(1; q)$. Moreover, if $f$ is differentiable at $(p, m)$ and the solution of (1) exists, then $f$ is integrable at $(p, m)$.

Remark 1.
If $q$ is on the boundary of $U$, then $u$ is said to be differentiable at $q \in U$ if and only if there exists a local differentiable extension $\tilde{u} : V \to \mathbb{R}$ of $u$: that is, there exists a differentiable function $\tilde{u}$ defined on an open neighborhood $V$ of $q$ such that if $r \in V \cap U$, then $u(r) = \tilde{u}(r)$. If $u$ is differentiable at $q$, then define $Du(q) = D\tilde{u}(q)$.
Proof. If such a solution $u : U \to \mathbb{R}$ of (1) exists, define $c(t; q) = u((1 - t)p + tq)$. Then, $c(t; q)$ is a solution of (2) defined on $[0, 1]$. By FACT 2, $c(t; q)$ is the unique solution of (2) with parameter $q$, and thus the solution of (1) is also unique. Clearly $c(1; q) = u(q)$. If $f$ is differentiable at $(p, m)$ and a continuously differentiable solution $u : U \to \mathbb{R}$ of (1) can be defined, then the right-hand side of (1) is differentiable at $p$. Hence, $u$ is twice differentiable at $p$. By a simple calculation, we have

(3) $\quad s_{ij}(p, m) = \frac{\partial^2 u}{\partial q_j \partial q_i}(p),$

and thus the integrability of $f$ at $(p, m)$ is equivalent to the symmetry of the second derivative of $u$ at $p$. Because $u$ is differentiable on a neighborhood of $p$ and its partial derivatives are differentiable at $p$, Young's theorem implies that this symmetry holds in (3), and thus $f$ is integrable at $(p, m)$. This completes the proof.
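Theorem 1 suggests a practical recipe: to evaluate the solution $u$ at $q$, integrate the ODE (2) numerically and read off $c(1; q)$. A minimal sketch, assuming the illustrative integrable demand $f(p, m) = (m/(2p_1), m/(2p_2))$ (our own example, not from the paper) and a classical RK4 integrator:

```python
# Sketch: evaluate u(q) = c(1; q) by integrating the ODE (2)
#   c'(t) = (q - p) . f((1-t)p + t q, c(t)),  c(0) = m,
# with classical Runge-Kutta (RK4). The demand f below is an
# illustrative integrable example, not taken from the paper.

def f(p, m):
    return (m / (2 * p[0]), m / (2 * p[1]))

def u_via_ode(p, m, q, steps=1000):
    def rhs(t, c):
        pt = [(1 - t) * pi + t * qi for pi, qi in zip(p, q)]
        fx = f(pt, c)
        return sum((qi - pi) * fi for pi, qi, fi in zip(p, q, fx))
    h, t, c = 1.0 / steps, 0.0, m
    for _ in range(steps):
        k1 = rhs(t, c)
        k2 = rhs(t + h / 2, c + h * k1 / 2)
        k3 = rhs(t + h / 2, c + h * k2 / 2)
        k4 = rhs(t + h, c + h * k3)
        c += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return c

# For this f, the exact solution of (1) is u(q) = m*sqrt(q1*q2/(p1*p2)),
# so with p = (1,1), m = 2, q = (4,1) the integration should return ~4.
val = u_via_ode((1.0, 1.0), 2.0, (4.0, 1.0))
```

The same routine, run for a grid of $q$, tabulates the whole solution surface $u$ without ever solving the PDE directly.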
Our aim is to prove the converse of the above theorem.

Theorem 2. Let $p, m, U$ satisfy the conditions of Theorem 1. If $f$ is differentiable, locally Lipschitz, and integrable, and if the solution $c(t; q)$ of equation (2) can be extended to $[0, 1]$ for every $q \in U$, then $u(q) \equiv c(1; q)$ is a solution of equation (1).
Proof. By FACT 4, the domain of the function $c$ is open. Therefore, we can extend $u$ to some open set $\tilde{U}$ including $U$. Thus, to prove this theorem, it suffices to show that $Du(q) = f(q, u(q))$ for any $q \in U$ such that $\tilde{U}$ includes some neighborhood of the segment $[p, q]$. Fix such a $q \in U$, and define $p(s) = (1 - s)p + sq$. First, suppose that for every $s \in [0, 1]$, there exists $v_s : V_s \to \mathbb{R}$ such that $V_s$ is an open ball centered at $p(s)$ and $v_s$ is a solution of the following differential equation:

(4) $\quad Dv_s(r) = f(r, v_s(r)), \quad v_s(p(s)) = c(s; q).$
If we define $v(r) = v_s(r)$ whenever $r \in V_s$, for $r \in V \equiv \bigcup_{s \in [0, 1]} V_s$, then $v$ is well-defined (the local solutions coincide on overlaps by the uniqueness in Theorem 1) and is a solution of (1) that satisfies $v(p(s)) = c(s; q)$.
Clearly, the domain $V$ of $v$ includes an open and convex neighborhood of the segment $[p, q]$. Therefore, there exists a convex neighborhood $W$ of $q$ such that if $r \in W$ and $t \in [0, 1]$, then $(1 - t)p + tr \in V$. We can assume that $V \subset U$ and $W \subset U$. If $r \in W$ and $t \in [0, 1]$, define $d_1(t) = c(t; r)$ and $d_2(t) = v((1 - t)p + tr)$. Then, both $d_1$ and $d_2$ are solutions of the ODE

$\dot{d}(t) = (r - p)\cdot f((1 - t)p + tr, d(t)), \quad d(0) = m,$

and thus, by FACT 2, $d_1 \equiv d_2$. This implies that $u(r) = c(1; r) = v(r)$ for every $r \in W$, and thus $u$ is a solution of (1), as desired. Therefore, it suffices to show that for every $s \in [0, 1]$, there exists a local solution of (4). We will only argue the case $s = 0$, because the proof of the other cases is almost the same. Thus, it suffices to show that there exists a local solution of (1).
Choose $a > 0$ and $b > 0$ such that $\Pi \equiv \{(r, w) \mid \|r - p\|_1 \le a,\ |w - m| \le b\} \subset P$, and let $L > 0$ be a constant such that if $(r_1, w_1), (r_2, w_2) \in \Pi$, then $\|f(r_1, w_1) - f(r_2, w_2)\| \le L(\|r_1 - r_2\|_1 + |w_1 - w_2|)$, and let $M = \max_{i, (r, w) \in \Pi} |f_i(r, w)|$. Without loss of generality, we can assume that $a > 0$ is so small that $aM \le b$. Let $\Pi_1 \equiv \{r \mid \|r - p\|_1 \le a\}$. Now, define $z_0(r) \equiv m$, and define recursively

$z_k(r) = m + \int_0^1 (r - p)\cdot f\bigl((1 - t)p + tr, z_{k-1}((1 - t)p + tr)\bigr)\,dt.$

Note that for every $r \in \Pi_1$, we have $(r, z_0(r)) \in \Pi$, and if $z_{k-1}(r)$ can be defined on $\Pi_1$ and $(r, z_{k-1}(r)) \in \Pi$ for every $r \in \Pi_1$, then $z_k(r)$ can also be defined on $\Pi_1$ and $(r, z_k(r)) \in \Pi$ for every $r \in \Pi_1$, because $|z_k(r) - m| \le \|r - p\|_1 M \le aM \le b$. Therefore, it follows by mathematical induction that $z_k(r)$ is defined on $\Pi_1$ and $(r, z_k(r)) \in \Pi$ for every $r \in \Pi_1$. Now, let $\|r\|_1 = \sum_{i=1}^n |r_i|$. Then, by induction on $k$, the standard successive-approximation estimate shows that $\sup_{r \in \Pi_1} |z_{k+1}(r) - z_k(r)|$ is bounded by a summable sequence in $k$.

YUHKI HOSOYA
Therefore, the sequence $(z_k(\cdot)) \subset C(\Pi_1, \mathbb{R})$ is a Cauchy sequence with respect to the supremum norm, and thus it converges uniformly to a continuous function $z : \Pi_1 \to \mathbb{R}$. Next, suppose that the function $z_{k-1}(\cdot)$ is differentiable and Lipschitz on $\Pi_1$. Then, the function $(t, r) \mapsto f((1 - t)p + tr, z_{k-1}((1 - t)p + tr))$ is differentiable and Lipschitz on $[0, 1] \times \Pi_1$. Thus, by the dominated convergence theorem, we may differentiate under the integral sign in the definition of $z_k$, and thus $z_k(\cdot)$ is also differentiable and Lipschitz. By mathematical induction, $z_k(r)$ is differentiable and Lipschitz for every $k$. We will show that the sequence of derivatives $(Dz_k(\cdot))$ converges uniformly on $\Pi_1$. For $k = 0$, the required estimate is obvious. Suppose that it is true for $k$. Abbreviating $f((1 - t)p + tr, z_k((1 - t)p + tr))$ as $f$ and $z_k((1 - t)p + tr)$ as $z_k$, a computation in which the second equality follows from the integrability condition $s_{ij} = s_{ji}$ of $f$ bounds $\|Dz_{k+2}(r) - Dz_{k+1}(r)\|$ in terms of $\|f_j(r, z_k(r)) - f_j(r, z_{k+1}(r))\|$ and the inductive hypothesis, which establishes the estimate for $k + 1$. Hence, $(Dz_k(\cdot))$ converges uniformly. Let $r$ be in the interior of $\Pi_1$, and for $s \neq 0$ define $\psi(s, k, r)$ as the difference quotient of $z_k$ at $r$ in the direction $e_j$, with $\psi(0, k, r) = \frac{\partial z_k}{\partial r_j}(r)$. If $s \neq 0$, by the mean value theorem, $\psi(s, k, r) = \frac{\partial z_k}{\partial r_j}(r + \theta s e_j)$ for some $\theta \in [0, 1]$, and hence $|\psi(s, k, r) - \psi(s, k', r)|$ tends to $0$ uniformly in $s$ as $k, k' \to \infty$; clearly this evaluation is valid even if $s = 0$. Therefore, the function $s \mapsto \psi(s, k, r)$ converges uniformly to some function $\psi(s, r)$ as $k \to \infty$, where $\psi$ is continuous in $s$ and $\psi(0, r) = \lim_k \frac{\partial z_k}{\partial r_j}(r)$. Letting $s \to 0$, we conclude that $z$ is differentiable at $r$ with $Dz(r) = \lim_k Dz_k(r) = f(r, z(r))$. Further, because $z_k(p) \equiv m$ by definition, we have $z(p) = m$, and thus $z$ is a solution of (1) defined on some neighborhood of $p$. This completes the proof.
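The successive approximations $z_k$ in the proof are a Picard iteration. The following sketch shows the same mechanism on the scalar model problem $\dot{x} = x$, $x(0) = 1$ (our own toy example, not from the paper), where the iterates are the Taylor polynomials of $e^t$ and converge uniformly on $[0, 1]$:

```python
# Toy Picard iteration: z_0 = 1, z_k(t) = 1 + integral_0^t z_{k-1}(s) ds,
# approximating the solution e^t of x' = x, x(0) = 1. Integrals are
# evaluated with the trapezoid rule on a fixed grid.

def picard(k, t, n=2000):
    grid = [i * t / n for i in range(n + 1)]
    z = [1.0] * (n + 1)                      # z_0 is the constant 1
    for _ in range(k):
        integral, new = 0.0, [1.0]
        for i in range(1, n + 1):
            integral += (z[i - 1] + z[i]) * (grid[i] - grid[i - 1]) / 2
            new.append(1.0 + integral)
        z = new
    return z[-1]

# picard(k, 1.0) approaches e = 2.71828... as k grows; the error
# decays factorially in k, mirroring the estimate used in the proof.
approx = picard(12, 1.0)
```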
There are three corollaries to this theorem. The first reproduces a classical result of [6].
Corollary 1. Suppose that $f : P \to \Omega$ is differentiable, locally Lipschitz, and integrable, and that $P$ includes the cube $\Pi = \{(q, w) \mid \max_i |q_i - p_i| \le a,\ |w - m| \le b\}$, where $a, b > 0$ satisfy $naM \le b$ for $M = \max_{i, (q, w) \in \Pi} |f_i(q, w)|$. Let $Q = \{q \mid \max_i |q_i - p_i| \le a\}$. Then, there exists a unique solution $u$ of (1) defined on $Q$.
Proof. By Theorem 2, it suffices to show that the solution $c(t; q)$ of (2) can be extended to $[0, 1]$ for all $q \in Q$. Suppose, on the contrary, that the nonextendable $c(t; q)$ can be defined only on $[0, t^*[$ for some $t^* \le 1$. Let $\bar{t} = \sup\{t \in [0, t^*[\ \mid |c(s; q) - m| \le b \text{ for all } s \in [0, t]\}$. If $\bar{t} < t^*$, then $\bar{t} < 1$, and thus $|c(\bar{t}; q) - m| < b$. However, in this case $|c(t; q) - m| < b$ for some $t$ such that $\bar{t} < t < t^*$, which contradicts the definition of $\bar{t}$. Therefore, $\bar{t} = t^*$. Thus, $(p(t), c(t; q)) \in \Pi$ for any $t \in [0, t^*[$, which contradicts FACT 3 and the definition of $t^*$. This completes the proof.
The second corollary is an extension of Theorem 10.9.4 of [1].
Corollary 2. Suppose that f : P → Ω is differentiable, locally Lipschitz, and integrable. Then, for any (p, m) ∈ P , there exists an open and convex neighborhood U of p such that there exists a solution u : U → R of (1).
Proof. Let $c(t; q)$ be the nonextendable solution of (2). Then, clearly $c(t; p) \equiv m$, because for $q = p$ the right-hand side of (2) vanishes. Thus, the domain of the mapping $t \mapsto c(t; p)$ is $\mathbb{R}$ itself. By FACT 4, the domain of $c : (t, q) \mapsto c(t; q)$ is open, and thus there exists an open and convex neighborhood $U$ of $p$ such that if $q \in U$, then the domain of $t \mapsto c(t; q)$ includes $[0, 1]$. Therefore, Theorem 2 can be applied, and this completes the proof.
The third corollary is an extension of Theorem 2 of [4].
Corollary 3. Suppose that $P = \mathbb{R}^n_{++} \times \mathbb{R}_{++}$ and $\Omega = \mathbb{R}^n_+$. Let $f : P \to \Omega$ be differentiable and locally Lipschitz, let the matrix-valued function $S_f(p, m) = (s_{ij}(p, m))$ be negative semi-definite and symmetric, and suppose that Walras' law $p \cdot f(p, m) = m$ holds for every $(p, m) \in P$. Then, for every $(p, m) \in P$, there exists a global solution $u : \mathbb{R}^n_{++} \to \mathbb{R}_{++}$ of (1).

Proof. By Theorem 2, this is equivalent to the extendability of the solution $c(t; q)$ of (2) to $[0, 1]$ for every $q \in \mathbb{R}^n_{++}$. Define $p(t) = (1 - t)p + tq$ and $x = f(p, m)$. We need the following lemma.

Lemma 1. The function $t \mapsto (q - p)\cdot f(p(t), c(t; q))$ is nonincreasing, and the function $t \mapsto p\cdot f(p(t), c(t; q))$ is nondecreasing.

Proof of Lemma 1. Write $Y(t) = f(p(t), c(t; q))$ and $S_t = S_f(p(t), c(t; q))$. Then, by a simple calculation, $\dot{Y}(t) = S_t(q - p)$. Meanwhile, by differentiating both sides of Walras' law $p(t)\cdot Y(t) = c(t; q)$, we obtain $p(t)^\top \dot{Y}(t) = 0$, that is, $p(t)^\top S_t(q - p) = 0$. Therefore, we can easily show that $\frac{d}{dt}\bigl[(q - p)\cdot Y(t)\bigr] = (q - p)^\top S_t(q - p) \le 0$ by negative semi-definiteness. The proof of the rest of the claim is symmetrical and is omitted here.
By Lemma 1, we have that $p(t)\cdot x \ge c(t; q)$, and thus $\limsup_{t \to t^*} c(t; q) < +\infty$, where $[0, t^*[$ with $t^* \le 1$ denotes the domain of the nonextendable solution. By FACT 3, the solution must leave every compact subset of $P$, and therefore $\liminf_{t \to t^*} c(t; q) = 0$. Hence, there exists a sequence $(t_k)$ such that $t_k \to t^*$ and $c(t_k; q) \to 0$. Define $x_k = f(p(t_k), c(t_k; q))$. By Walras' law, $p(t_k)\cdot x_k = c(t_k; q) \to 0$, and since $p(t_k) \to p(t^*) \gg 0$ and $x_k \ge 0$, the sequence $(x_k)$ is bounded. Therefore, taking a subsequence, we can assume that $x_k \to 0$, so that $p \cdot x_k \to 0$. However, by Lemma 1, $p\cdot x_k \ge p\cdot x = m > 0$, a contradiction. This completes the proof.
Is there a function $f$ that admits a solution of (1) but is not covered by Theorem 2? The following simple example answers this question; Theorem 1 is essential for determining the solution in it. Example 1. Consider the following function. This function is not differentiable if $p_2^2 = 4p_1 m$, and thus Theorem 2 is not applicable. We will guess a solution of the equation (1) defined by the above $f$, and verify that it is actually the (unique) solution. First, choose any $q = (q_1, q_2)$. If $u$ is a solution of (1), then $c(t) = u((1 - t)p + tq)$ satisfies the following ODE:

(5) $\quad \dot{c}(t) = (q - p)\cdot f((1 - t)p + tq, c(t)), \quad c(0) = m.$

Therefore, we can guess that if $c(t)$ is a solution of (5) defined on $[0, 1]$, then $c(1)$ coincides with $u(q)$. Second, define $c_1$ and consider the ODE (6); solving (6), if $q_2 = p_2$, we obtain an explicit formula for $c_1(t)$. Third, suppose that $p_2 = q_2$, $4p_1 m \le p_2^2$, and $4q_1 c_1(1) \le q_2^2$, where $c_1(0) = m$. Note that $4(p_1 + t(q_1 - p_1))c_1(t)$ is monotone. Thus, in this case we have $c(t) = c_1(t)$ on $[0, 1]$, and we obtain a candidate for $u(q)$ valid when $2q_1 m/p_1 \le q_2$. Because $f$ is homogeneous of degree zero, we can guess that $u$ is homogeneous of degree one, and thus we can remove the assumption $p_2 = q_2$. By an easy calculation, we can confirm that $u$ is actually a solution of equation (1) on the set $\{q \mid 2q_1 m/p_1 \le q_2\}$. Fourth, suppose that $p_2 = q_2$, $4p_1 m \le p_2^2$, and $4q_1 c_1(1) > q_2^2$, where $c_1(0) = m$. Note that $q_1 \neq p_1$: if $q_1 < p_1$, then $c_1(t)$ is decreasing and $4q_1 c_1(1) < q_2^2$, which is absurd. Thus, we have $q_1 > p_1$. We can guess that $c(t) = c_1(t)$ on $[0, t^*]$ and $c(t) = c_2(t)$ on $[t^*, 1]$, and thus

$c(t^*) = c_1(t^*) = c_2(t^*) = \frac{p_2^2}{4(p_1 + t^*(q_1 - p_1))}.$

Solving this equation, we obtain $t^*$ explicitly. Check that $t^* \in [0, 1]$ if and only if $2q_1 m/p_1 \ge q_2$, which is equivalent to $4q_1 c_1(1) \ge q_2^2$; because we have assumed $4q_1 c_1(1) > q_2^2$, this condition holds. Then, solving the ODE for $c_2$ on $[t^*, 1]$, we obtain a candidate for $u$ whose form is homogeneous of degree one. We can check that this $u$ is actually a solution of (1) with $u(p) = m$.
By a similar argument, we obtain a candidate for a solution $u$ of (1) even if $4p_1 m > p_2^2$. It can easily be verified that this $u$ is actually a solution of (1).

4. Application in economics: the integrability problem. Throughout this section, we assume that $P = \mathbb{R}^n_{++} \times \mathbb{R}_{++}$ and $\Omega = \mathbb{R}^n_+$. Consider the following (simple) optimization problem:

$\max v(x)$ subject to $x \in \Omega,\ p\cdot x \le m,$

where $x$ is called the consumption bundle, $p$ is called the price vector, and $m$ is called the income. The value $v(x)$ measures the goodness of the consumption bundle $x$ for this consumer, and the function $v$ is called the utility function. This problem represents the usual consumption problem. Under several conditions, for any $p \gg 0$ and $m > 0$ there uniquely exists a solution $f(p, m)$ of the above problem, and the function $f$ is called the demand function of $v$. If $v$ is increasing, then $f$ clearly satisfies Walras' law $p\cdot f(p, m) = m$. Now, consider the following observation problem: we can observe the consumer's purchase behavior, and thus we can estimate the demand function $f$ from purchase data. In contrast, the utility function $v$ represents the consumer's preference, which is hidden in the consumer's mind, and estimating $v$ directly is therefore difficult. Our problem is the following: can we recover the information of $v$ from $f$? If so, then we can estimate $v$ using the estimated value of $f$ and this reverse calculation method. In other words, we can reveal the consumer's preference from purchase data. In economics, this problem is called the integrability problem. Now, consider the following 'dual' problem:

$\min q\cdot y$ subject to $v(y) \ge v(x),\ y \in \Omega,$

and let $E_x(q)$ be the value of this dual problem: that is,

$E_x(q) = \inf\{q\cdot y \mid v(y) \ge v(x),\ y \in \Omega\}.$

This function is called the expenditure function in consumer theory. We will show the following theorem.
Theorem 3. Suppose that the demand function $f : P \to \Omega$ of $v$ is differentiable and satisfies Walras' law, and $x = f(p, m)$. Then, the function $E_x$ is concave and twice differentiable, and $DE_x(q) = f(q, E_x(q))$ for every $q \in \mathbb{R}^n_{++}$. Moreover, $E_x(p) = m$, and if $v$ is continuous, then for every $y \in \Omega$ and every $q \in \mathbb{R}^n_{++}$, $v(x) \le v(y)$ if and only if $E_x(q) \le E_y(q)$.

Proof. Choose any $q_1, q_2 \in \mathbb{R}^n_{++}$ and $t \in [0, 1]$. Fix any $\varepsilon > 0$, and suppose $v(y) \ge v(x)$ and $q\cdot y \le E_x(q) + \varepsilon$, where $q = (1 - t)q_1 + tq_2$. Then,

$E_x(q) + \varepsilon \ge q\cdot y = (1 - t)q_1\cdot y + tq_2\cdot y \ge (1 - t)E_x(q_1) + tE_x(q_2).$

Because $\varepsilon > 0$ is arbitrary, we have that $E_x$ is concave, and thus continuous.
Next, suppose that $v(y) \ge v(x)$ and $y \neq x$. Because $x = f(p, m)$, we have $p\cdot y > m$. Meanwhile, $v(x) \ge v(x)$ and $p\cdot x = m$. This implies that $E_x(p) = m$.
Define $x(q) = f(q, E_x(q))$. This function is continuous, and $q\cdot x(q) = E_x(q)$ by Walras' law. Fix any $\varepsilon > 0$ and define $x_\varepsilon(q) = f(q, E_x(q) + \varepsilon)$. By definition of $E_x(q)$, there exists $y \in \Omega$ such that $v(y) \ge v(x)$ and $q\cdot y < E_x(q) + \varepsilon$. This implies that $v(x_\varepsilon(q)) \ge v(y) \ge v(x)$. Hence, for any $q, r \in \mathbb{R}^n_{++}$, we have $q\cdot x(q) = E_x(q) \le q\cdot x_\varepsilon(r)$. If $\varepsilon \downarrow 0$, then $x_\varepsilon(r) \to x(r)$, and thus $q\cdot x(q) \le q\cdot x(r)$. Now, let $e_i$ be the $i$-th unit vector and $q(t) = q + te_i$. Then, for $t > 0$,

$E_x(q(t)) = q(t)\cdot x(q(t)) = q\cdot x(q(t)) + tx_i(q(t)) \ge E_x(q) + tx_i(q(t)),$

and, symmetrically, $E_x(q(t)) \le q(t)\cdot x(q) = E_x(q) + tx_i(q)$. Therefore,

$x_i(q(t)) \le \frac{E_x(q(t)) - E_x(q)}{t} \le x_i(q),$

where both one-sided limits of the difference quotient as $t \downarrow 0$ and $t \uparrow 0$ exist because $E_x$ is concave, and they coincide because $x_i(q(t)) \to x_i(q)$ by continuity. This means that $\frac{\partial E_x}{\partial q_i}(q) = f_i(q, E_x(q))$, and thus we have $DE_x(q) = f(q, E_x(q))$, as desired.
Therefore, $DE_x(q)$ is continuous. Since $f$ is differentiable and $E_x$ is differentiable, the mapping $q \mapsto f(q, E_x(q))$ is differentiable; hence, $DE_x(q)$ is differentiable, and thus $E_x$ is twice differentiable.
Next, let $v$ be continuous, and define $z = f(q, E_y(q))$ and $w = f(q, E_x(q))$. By definition, for every $\varepsilon > 0$, there exists $z'$ such that $v(z') \ge v(y)$ and $q\cdot z' < E_y(q) + \varepsilon$. Since the demand at $(q, E_y(q) + \varepsilon)$ can afford $z'$, we have $v(f(q, E_y(q) + \varepsilon)) \ge v(z') \ge v(y)$; letting $\varepsilon \downarrow 0$ and using the continuity of $f$ and $v$, we obtain $v(z) \ge v(y)$. Meanwhile, if $v(z) > v(y)$, then by the continuity of $v$, $v(\lambda z) \ge v(y)$ for some $\lambda \in ]0, 1[$, while $q\cdot(\lambda z) < q\cdot z = E_y(q)$ by Walras' law, which contradicts the definition of $E_y(q)$; thus we have $v(z) \le v(y)$, which implies that $v(z) = v(y)$. By the same arguments, we have $v(x) = v(w)$. Then, if $E_x(q) \le E_y(q)$, the budget set at $(q, E_x(q))$ is included in that at $(q, E_y(q))$, and thus $v(x) = v(w) \le v(z) = v(y)$; conversely, if $v(x) \le v(y)$, then $\{y' \mid v(y') \ge v(y)\} \subset \{y' \mid v(y') \ge v(x)\}$, and thus $E_x(q) \le E_y(q)$, as desired. This completes the proof.
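The expenditure function in Theorem 3 can also be explored numerically. The sketch below assumes the illustrative utility $v(x) = \sqrt{x_1 x_2}$ (our own example, not from this paper) and approximates $E_x(q)$ by a one-dimensional grid search, which can be compared against the known closed form $2\,v(x)\sqrt{q_1 q_2}$ for this utility:

```python
# Sketch: E_x(q) = inf { q.y : v(y) >= v(x), y >= 0 } for the
# illustrative utility v(x) = sqrt(x1*x2). Given y1, the cheapest y2
# with v(y) >= v(x) is v(x)^2 / y1, so the search is one-dimensional.

def v(x):
    return (x[0] * x[1]) ** 0.5

def expenditure(x, q, n=100000):
    target = v(x)
    best = float("inf")
    for i in range(1, n + 1):
        y1 = 10.0 * i / n                 # grid over y1 in (0, 10]
        y2 = target ** 2 / y1             # cheapest feasible y2
        best = min(best, q[0] * y1 + q[1] * y2)
    return best

# Closed form for this v: E_x(q) = 2 * v(x) * sqrt(q1 * q2).
# With x = (1, 1) and q = (4, 1), both methods give 4.
E = expenditure((1.0, 1.0), (4.0, 1.0))
```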
This theorem includes a key idea for solving the integrability problem. We separate this idea into three steps.
• In the first step, we assume that $f$ is a demand function of $v$, and that we know $v$. If $v$ is continuous, then for any fixed $\bar{p} \in \mathbb{R}^n_{++}$, the mapping $v_{f,\bar{p}} : x \mapsto E_x(\bar{p})$ has the same information as $v$.
• In the second step, we again assume that $f$ is a demand function of $v$, and that $v$ is continuous. However, in this step, $v$ is assumed to be unknown. In this case, the above $v_{f,\bar{p}}$ still has the same information as $v$. Because $v$ is unknown, however, we cannot calculate $v_{f,\bar{p}}(x)$ directly. In this case, Theorem 3 is important. Actually, we know that $E_x(p) = m$. Further, $E_x$ is a solution of (1). Therefore, if $f$ is locally Lipschitz in $m$, we can apply the uniqueness result in Theorem 1, and thus $E_x$ is the unique solution of (1). Hence, we can calculate $v_{f,\bar{p}}(x) = E_x(\bar{p}) = u(\bar{p})$ by solving the equation (1) or, alternatively, the equation (2) with $q = \bar{p}$ and calculating $c(1; \bar{p})$.
• In the third step, only the function $f$ is given. We do not know whether this function is a demand function of some utility function $v$. In this case, there may be no solution of (1). If there is a unique solution $E_x$ of (1) for every $x = f(p, m)$, then $v_{f,\bar{p}} : x \mapsto E_x(\bar{p})$ can be calculated. However, it is not known whether $f$ is a demand function of $v_{f,\bar{p}}$. Therefore, we need a verification procedure for $f$ to be a demand function of some utility function. Actually, the following theorem holds.
Theorem 4. Suppose that $f : P \to \Omega$ is continuous, locally Lipschitz in $m$, and satisfies Walras' law. Let the following two conditions hold: (I) For every $(p, m)$, there exists a concave solution $u : \mathbb{R}^n_{++} \to \mathbb{R}_{++}$ of (1). (II) If $x \neq y$, $x = f(p, m)$, $y = f(q, w)$, and $w \ge u(q)$ (where $u : \mathbb{R}^n_{++} \to \mathbb{R}_{++}$ is the solution of (1)), then $p\cdot y > m$. Then, $f$ is a demand function of some utility function.
Remark 2. If $f$ is a demand function of some utility function $v$, then (I) holds by Theorem 3. Moreover, if $v$ is continuous and $x = f(p, m)$, $y = f(q, w)$, $x \neq y$, and $w \ge u(q)$, then $E_y(q) = w \ge u(q) = E_x(q)$, and thus $v(y) \ge v(x)$ by Theorem 3. If $p\cdot y \le m$, then $v(y) \le v(x)$, and thus $v(y) = v(x)$. However, this contradicts that $x = f(p, m)$ is the unique solution of the consumption problem with $(p, m)$. Therefore, we have $p\cdot y > m$, and (II) holds. Thus, (I) and (II) are necessary conditions for the demand function of a continuous utility function. Theorem 4 says that, under several conditions, they are also sufficient conditions.
Proof. Suppose that (I) and (II) hold. Let $x \neq y$, $x = f(p, m)$, $y = f(q, w)$, $p\cdot y \le m$, and let $u_1, u_2 : \mathbb{R}^n_{++} \to \mathbb{R}_{++}$ be solutions of

(10) $\quad Du(r) = f(r, u(r))$

with $u_1(p) = m$ and $u_2(q) = w$, respectively. By contraposition of (II), we have that $u_1(q) > w = u_2(q)$, and thus $u_1(r) > u_2(r)$ for every $r \in \mathbb{R}^n_{++}$. In particular, $m = u_1(p) > u_2(p)$, and hence we have $q\cdot x > w$ by (II). To summarize this argument: if $x \neq y$, $x = f(p, m)$, $y = f(q, w)$, and $p\cdot y \le m$, then $q\cdot x > w$. (In economics, this property is called the weak axiom of revealed preference.) Suppose now that $x = y$: that is, $f(p, m) = f(q, w)$. Define $p(t) = (1 - t)p + tq$ and $m(t) = (1 - t)m + tw$, and note that $p(t)\cdot y = m(t)$ by Walras' law. If $f(p(t), m(t)) \neq y$ for some $t \in [0, 1]$, then, because $y$ is affordable at $(p(t), m(t))$, the weak axiom of revealed preference (applied at $(p, m)$ and at $(q, w)$) yields $p\cdot f(p(t), m(t)) > m$ and $q\cdot f(p(t), m(t)) > w$, which together imply that $p(t)\cdot f(p(t), m(t)) > m(t)$, contradicting Walras' law. Therefore, $f(p(t), m(t)) = y$ for every $t \in [0, 1]$, and thus $\dot{m}(t) = (q - p)\cdot y = (q - p)\cdot f(p(t), m(t))$. Meanwhile, if $u$ is a solution of (10) with $u(p) = m$ and $c(t) = u((1 - t)p + tq)$, then $\dot{c}(t) = (q - p)\cdot f(p(t), c(t))$, $c(0) = m$. Therefore, both $c(t)$ and $m(t)$ are solutions of the same ODE. By the local Lipschitz assumption in $m$ and FACT 2, such a solution is unique, and thus $c(t) \equiv m(t)$. In particular, $u(q) = c(1) = m(1) = w$. Fix any $\bar{p} \in \mathbb{R}^n_{++}$ and define $v_{f,\bar{p}}(x) = 0$ if $x$ is not in the range of $f$; if $x = f(p, m)$, then define $v_{f,\bar{p}}(x) = u(\bar{p})$, where $u : \mathbb{R}^n_{++} \to \mathbb{R}_{++}$ is the solution of (10) with $u(p) = m$. Then, the above fact implies that the definition of $v_{f,\bar{p}}(x)$ does not depend on the choice of $(p, m) \in f^{-1}(x)$.
Next, let x = f (p, m), x = y and p · y ≤ m. If y is not in the range of f , then v f,p (y) = 0 < v f,p (x). Otherwise, let u 1 (resp. u 2 ) be the solution of (10) with u 1 (p) = m (resp. u 2 (q) = w). By contraposition of (II), we have u 1 (q) > w = u 2 (q). This implies that u 1 (p) > u 2 (p), and thus v f,p (x) > v f,p (y). Therefore, we have that f is the demand function of the utility function v f,p . This completes the proof.
By Corollary 3, we can obtain the following result.
Corollary 4. Suppose that $f : P \to \Omega$ is differentiable, locally Lipschitz, and satisfies Walras' law. Moreover, suppose that the matrix-valued function $S_f(p, m) = (s_{ij}(p, m))$ (hereafter, the Slutsky matrix) is negative semi-definite and symmetric. Then, $f$ is a demand function of some utility function.
Remark 3. If $f$ is a demand function of some utility function, then for every $(p, m)$ with $x = f(p, m)$, Theorem 3 gives $DE_x(q) = f(q, E_x(q))$ and $E_x(p) = m$, and thus $S_f(p, m) = D^2E_x(p)$ is symmetric and negative semi-definite, because $E_x$ is concave and twice differentiable. Therefore, the above corollary states that the negative semi-definiteness and symmetry of the Slutsky matrix $S_f(p, m)$ is a necessary and sufficient condition for $f$ to be a demand function.
Proof. By Corollary 3, we have that for every $(p, m)$, there exists a solution $u : \mathbb{R}^n_{++} \to \mathbb{R}_{++}$ of (10) with $u(p) = m$. Then, $D^2u(q) = S_f(q, u(q))$ is negative semi-definite, and thus $u$ must be concave, so statement (I) of Theorem 4 holds. Let $x \neq y$, $x = f(p, m)$, $y = f(q, w)$, and $w \ge u(q)$. Let $u_1 : \mathbb{R}^n_{++} \to \mathbb{R}_{++}$ be the solution of (10) with $u_1(q) = w$, and define $p(t) = (1 - t)p + tq$ and $d(t) = p\cdot f(p(t), u_1(p(t)))$. If $u_1(q) = w > u(q)$, then $u_1(p) > u(p) = m$. In the proof of Lemma 1, we have shown that $d$ is nondecreasing, and thus $p\cdot y = d(1) \ge d(0) = p\cdot f(p, u_1(p)) = u_1(p) > m$. Next, suppose that $u_1(q) = w = u(q)$. Then, $u_1(r) = u(r)$ for every $r \in \mathbb{R}^n_{++}$. Thus, $p\cdot y = d(1)$ and $m = p\cdot x = d(0)$. To show that statement (II) holds, it suffices to show that $d(1) > d(0)$. Suppose not. Because $d$ is nondecreasing, we have $d(t) \equiv d(0) = m$ for every $t \in [0, 1]$. We abbreviate $S_f(p(t), u(p(t)))$ as $S_t$. In the proof of Lemma 1, we showed that $\dot{d}(t) = -t(q - p)^\top S_t(q - p)$. Because $S_t$ is symmetric and negative semi-definite, there exists a symmetric, positive semi-definite matrix $A_t$ such that $S_t = -A_t^2$. Then, $\dot{d}(t) = t\|A_t(q - p)\|^2 = 0$ for every $t \in [0, 1]$, and thus if $t > 0$, we have $A_t(q - p) = 0$. Define $Y(t) = f(p(t), u(p(t)))$. Then, we have $Y(0) = x$, $Y(1) = y$, and $\dot{Y}(t) = S_t(q - p) = -A_t(A_t(q - p)) = 0$ for every $t \in ]0, 1]$, which implies that $x = y$, a contradiction. Therefore, we have $d(1) > d(0)$, and (II) holds. This completes the proof.
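The hypothesis of Corollary 4 can be checked numerically from demand data alone. A sketch, again with the illustrative demand $f(p, m) = (m/(2p_1), m/(2p_2))$ (not from the paper) and forward-difference derivatives:

```python
# Sketch: estimate the Slutsky matrix s_ij = df_i/dp_j + f_j * df_i/dm
# by forward differences and test the symmetry and negative
# semi-definiteness required by Corollary 4. The demand f is an
# illustrative example, not taken from the paper.

def f(p, m):
    return [m / (2 * p[0]), m / (2 * p[1])]

def slutsky(p, m, h=1e-6):
    n, fx = len(p), f(p, m)
    S = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            pj = list(p)
            pj[j] += h
            dfi_dpj = (f(pj, m)[i] - fx[i]) / h   # forward difference in p_j
            dfi_dm = (f(p, m + h)[i] - fx[i]) / h # forward difference in m
            S[i][j] = dfi_dpj + fx[j] * dfi_dm
    return S

S = slutsky((1.0, 2.0), 4.0)
# For a symmetric 2x2 matrix, negative semi-definiteness is equivalent
# to trace(S) <= 0 and det(S) >= 0 (up to discretization error here).
```

For this demand the exact Slutsky matrix has determinant zero, so the numerical determinant is only near zero; any tolerance used in practice must absorb the finite-difference error.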
(To construct $A_t$: if $S_t = P^\top \Lambda P$, where $P$ is an orthogonal matrix and $\Lambda$ is the diagonal matrix of the eigenvalues $\lambda_1, \ldots, \lambda_n$ of $S_t$, then $A_t = P^\top \operatorname{diag}(\sqrt{-\lambda_1}, \ldots, \sqrt{-\lambda_n})P$ is symmetric, positive semi-definite, and satisfies $S_t = -A_t^2$.)

In general, statement (II) of Theorem 4 is rather hard to verify, and thus a more reasonable sufficient condition is needed. The following theorem gives such a condition.
Theorem 5. Suppose that there exist a finite family of $C^1$-class functions $f^1, \ldots, f^N : P \to \mathbb{R}^n$ that satisfy Walras' law and a partition $(A_1, \ldots, A_N)$ of $P$ such that $f(p, m) = f^i(p, m)$ for $(p, m) \in A_i$. Moreover, suppose that the Slutsky matrix $S_{f^i}(p, m)$ is symmetric and negative semi-definite for all $i$. Then, statement (I) of Theorem 4 implies statement (II) of Theorem 4.
Proof. It is clear that $f$ is locally Lipschitz. First, fix $(q, w) \in \mathbb{R}^n \times \mathbb{R}$ and define

$df(p, m; q, w) = \Bigl\{ v \;\Bigm|\; v = \lim_{k \to \infty} \frac{f((p, m) + t_k(q, w)) - f(p, m)}{t_k} \text{ for some sequence } t_k \downarrow 0 \Bigr\}.$

Note that because $f$ is locally Lipschitz, $df(p, m; q, w)$ is nonempty if $(p, m) \in P$. We will show that, for every $(p, m) \in P$, there exist $v \in df(p, m; q, w)$ and a sequence $((p_k, m_k))$ such that $f$ is differentiable at $(p_k, m_k)$, $(p_k, m_k) \to (p, m)$, and $Df(p_k, m_k)(q, w) \to v$ as $k \to \infty$. Because $f$ is locally Lipschitz, we can use Rademacher's theorem, and thus $f$ is differentiable at almost every point. If $f$ is differentiable at $(p, m)$, there exist $i$ and a sequence $(t_k)$ of positive real numbers such that $t_k \downarrow 0$ and $(p + t_kq, m + t_kw) \in A_i$. By the continuity, we have $f(p, m) = f^i(p, m)$, and thus

$Df(p, m)(q, w) = \lim_{k \to \infty} \frac{f^i((p, m) + t_k(q, w)) - f^i(p, m)}{t_k} = Df^i(p, m)(q, w).$

Therefore, if we define $B_i$ as the set of all $(p, m)$ at which $f$ is differentiable, $f(p, m) = f^i(p, m)$, and $Df(p, m)(q, w) = Df^i(p, m)(q, w)$, then almost every point of $P$ belongs to $\bigcup_i B_i$. Choose any $(p, m) \in \mathbb{R}^n_{++} \times \mathbb{R}_{++}$, $v \in df(p, m; q, w)$, and a sequence $(t_k)$ of positive real numbers such that $t_k \downarrow 0$ and

$v = \lim_{k \to \infty} \frac{f((p, m) + t_k(q, w)) - f(p, m)}{t_k}.$

Taking a subsequence, we can assume that there exists $i$ such that for every $k$, $(p + t_kq, m + t_kw)$ is in the closure of $B_i$. Then, $(p, m)$ is also in the closure of $B_i$, and by the continuity of $f$, we have $f(p, m) = f^i(p, m)$ and $f(p + t_kq, m + t_kw) = f^i(p + t_kq, m + t_kw)$, so that $v = Df^i(p, m)(q, w)$. Thus, if we choose any sequence $((p_k, m_k))$ in $B_i$ such that $(p_k, m_k) \to (p, m)$, then $Df(p_k, m_k)(q, w) = Df^i(p_k, m_k)(q, w) \to Df^i(p, m)(q, w) = v$, as desired.
Next, we introduce a lemma.

Lemma 2. Suppose that $u : \mathbb{R}^n_{++} \to \mathbb{R}_{++}$ is a concave solution of (10), and define $p(t) = (1 - t)p + tq$ and $d(t) = p\cdot f(p(t), u(p(t)))$. Then, $d$ is nondecreasing on $[0, 1]$.
Proof of Lemma 2. Suppose not. Then, there exist $t_1, t_2 \in [0, 1]$ such that $t_1 < t_2$ and $d(t_1) > d(t_2)$. Let

$c(t) = d(t) - d(t_1) - \frac{t - t_1}{t_2 - t_1}\bigl(d(t_2) - d(t_1)\bigr).$

Then, $c(t_1) = c(t_2) = 0$. Because $c(t)$ is continuous on $[t_1, t_2]$, there exists $t^* \in ]t_1, t_2[$ such that $c(t^*)$ attains either the maximum or the minimum on $[t_1, t_2]$. Suppose that $c(t^*)$ attains the minimum, and let $w^* = \frac{d}{ds}u(p(s))\big|_{s = t^*}$; the difference quotients of $d$ at $t^*$ then differ from the corresponding quotients of $f$ in the direction $(q - p, w^*)$ by at most $L|t - t^*|$, where $L > 0$ is some positive constant whose existence is ensured by the local Lipschitz property. By the fact shown above, there exist $v \in df(p(t^*), u(p(t^*)); q - p, w^*)$ and a sequence $((p_k, m_k))$ such that $f$ is differentiable at each $(p_k, m_k)$, $(p_k, m_k) \to (p(t^*), u(p(t^*)))$, and $Df(p_k, m_k)(q - p, w^*) \to v$. On the other hand, if $u_k$ is a solution of (10) with the initial value condition $u_k(p_k) = m_k$, then $D^2u_k(p_k) = S_f(p_k, m_k)$. Because $u_k$ is concave, $S_f(p_k, m_k)$ is negative semi-definite. Moreover, in the proof of Lemma 1, we have already obtained $p_k^\top S_f(p_k, m_k) = 0$ by Walras' law. Therefore,

$p\cdot Df(p_k, m_k)(q - p, w^*) = (p - p_k)^\top S_f(p_k, m_k)(q - p) + o(1),$

where the left-hand side tends to $p\cdot v$ and the right-hand side tends to $-t^*(q - p)^\top \bigl(\lim_k S_f(p_k, m_k)\bigr)(q - p) \ge 0$ as $k \to \infty$, because of the local Lipschitz condition of $f$ and the negative semi-definiteness of the Slutsky matrices. This implies that the corresponding one-sided difference quotient of $d$ at $t^*$ has a nonnegative limit, whereas the minimality of $c(t^*)$ forces it to be at most $\frac{d(t_2) - d(t_1)}{t_2 - t_1} < 0$, a contradiction. Similarly, if $c$ attains the maximum at $t^*$, we can derive a contradiction. This completes the proof of Lemma 2.
As in the proof of Corollary 4, let $u_1$ be the solution of (10) with $u_1(q) = w$, where $y = f(q, w)$ satisfies the hypotheses of statement (II), let $d(t) = p\cdot f(p(t), u_1(p(t)))$, and suppose that $d$ is constant; fix $t^* \in ]0, 1[$. Let $w^* = \frac{d}{ds}u_1(p(s))\big|_{s = t^*}$, $v \in df(p(t^*), u_1(p(t^*)); q - p, w^*)$, $(p_k, m_k) \to (p(t^*), u_1(p(t^*)))$ with $Df(p_k, m_k)(q - p, w^*) \to v$ as $k \to \infty$, and let $S_k$ denote $S_f(p_k, m_k)$. Note that, by Young's theorem, $S_k$ is symmetric and negative semi-definite, and thus there exists a symmetric and positive semi-definite matrix $A_k$ such that $S_k = -A_k^2$. Then,

(11) $\quad p\cdot v = \lim_{k \to \infty} p^\top S_k(q - p),$

(12) $\quad p^\top S_k(q - p) = (p(t^*) - p_k)^\top S_k(q - p) + t^*\|A_k(q - p)\|^2,$

where (12) uses $p_k^\top S_k = 0$, which holds by Walras' law. Because $\dot{d}(t^*) = 0$ and $\dot{d}(t^*) = p\cdot v$, we have $A_k(q - p) \to 0$ as $k \to \infty$ by (11) and (12). (Note that (11) holds with equality because $\dot{d}(t^*)$ exists.) Meanwhile, by the same arguments as in Lemma 2, we have $\|v - \lim_k S_k(q - p)\| \le L\lim_k|w^* - (q - p)\cdot f(p_k, m_k)| = 0$, where $L > 0$ is some constant. Therefore,

$\frac{d}{dt}f(p(t), u_1(p(t)))\Big|_{t = t^*} = v = -\lim_{k \to \infty} A_k\bigl(A_k(q - p)\bigr) = 0.$

This implies that $t \mapsto f(p(t), u_1(p(t)))$ is constant on $[0, 1]$, and hence $x = y$, which is absurd. This completes the proof.

Example 2. Consider the function $f$ defined in Example 1. Then, the solution of (10) is in general given by (8) and (9), and the solution $u$ is concave in both cases. By Theorem 5, this function $f$ satisfies both (I) and (II) of Theorem 4. Set $\bar{p} = (1, 1)$. By an easy computation, we can obtain $v_{f,\bar{p}}(x)$ as in the proof of Theorem 4. Note that for $x \ge 0$ with $x_1 > 0$, this function gives the same order as an explicit utility function $v$, and readers can easily check that $f(p, m)$ is the unique solution of the corresponding consumption problem. Therefore, using Theorems 4 and 5, we have successfully recovered the information of $v$ on $\{x \in \mathbb{R}^2_+ \mid x_1 > 0\}$.

Appendix. Because $\tilde{\Pi}$ is compact, there exists $L > 0$ such that if $(t, x_1, y), (t, x_2, y) \in \tilde{\Pi}$, then

$\|f(t, x_1, y) - f(t, x_2, y)\| \le L\|x_1 - x_2\|.$

Moreover, because $f$ is continuous on $P$, it is uniformly continuous on $\tilde{\Pi}$, and thus there exists a nondecreasing nonnegative function $\beta(e)$ such that $\lim_{e \downarrow 0}\beta(e) = 0$ and, if $(t, x, y_1), (t, x, y_2) \in \tilde{\Pi}$, then

$\|f(t, x, y_1) - f(t, x, y_2)\| \le \beta(\|y_1 - y_2\|).$

Choose any $t \in [t_0, r_2]$. If $x(t; y)$ is defined and $(s, x(s; y), y) \in \tilde{\Pi}$ for all $s \in [t_0, t]$, then

$\|x(t; y) - x(t; y^*)\| \le \int_{t_0}^t \bigl[L\|x(s; y) - x(s; y^*)\| + \beta(\|y - y^*\|)\bigr]\,ds.$
Proof of FACT 5. First, we will prove a lemma.
This completes the proof.
where the second equality comes from FACT 5. Thus, $h_i(t; q)$ is a solution of the linear differential equation $\dot{h}_i = a(t)h_i$ with $h_i(0; q) = 0$, where $a(t) = \frac{\partial f}{\partial w}\bigl((1 - t)p + tq, c(t; q)\bigr)\cdot(q - p)$.
This completes the proof.