
Vector-Valued Functions

Now we know how to modify the notion of the derivative of a function to deal with vector inputs by defining the differential. But what about functions that have vectors as outputs?

Well, luckily when we defined the differential we didn’t really use anything about the space where our function took its values other than the fact that it was a topological vector space. Indeed, when defining the differential of a function f:\mathbb{R}^m\rightarrow\mathbb{R}^n we need to set up a new function that takes a point x in the Euclidean space \mathbb{R}^m and a displacement vector t in \mathbb{R}^m as inputs, and which gives a displacement vector df(x;t) in \mathbb{R}^n as its output. It must be linear in the displacement, meaning that we can view df(x) as a linear transformation from \mathbb{R}^m to \mathbb{R}^n. And it must satisfy a similar approximation condition, replacing the absolute value with the notion of length in \mathbb{R}^n: for every \epsilon>0 there is a \delta>0 so that if \delta>\lVert t\rVert_{\mathbb{R}^m}>0 we have

\displaystyle\lVert\left[f(x+t)-f(x)\right]-df(x)t\rVert_{\mathbb{R}^n}<\epsilon\lVert t\rVert_{\mathbb{R}^m}

From here on we’ll just determine which norm we mean by context, since we only have one norm on each vector space.
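
As a quick sanity check, suppose f is itself a linear transformation, say f(x)=Ax for some fixed matrix A. Then

\displaystyle f(x+t)-f(x)=A(x+t)-Ax=At

so setting df(x)t=At makes the quantity inside the norm vanish identically, and the approximation condition holds for every \epsilon. That is, the differential of a linear map is the map itself, at every point.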

Okay, so we can talk about differentials of vector-valued functions: the differential of f at a point x (if it exists) is a linear transformation df(x) that turns displacements in the input space into displacements in the output space, and does so in the way that most closely approximates the action of the function itself. But how do we actually get our hands on it?

If we pick an orthonormal basis e_i for \mathbb{R}^n we can write the components of f as separate functions. That is, we say

\displaystyle f(x)=f^i(x)e_i
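
For instance, we might consider the function f:\mathbb{R}^2\rightarrow\mathbb{R}^3 defined by

\displaystyle f(x,y)=xy\,e_1+\sin(x)e_2+(x+y)e_3

whose components are the three real-valued functions f^1(x,y)=xy, f^2(x,y)=\sin(x), and f^3(x,y)=x+y. We’ll come back to this sample function at the end.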

Now I assert that the differential can be taken component-by-component, just as continuity can be checked component-by-component: df(x)=df^i(x)e_i. On the left is the differential of f as a vector-valued function, while on the right we find the differentials of the several real-valued functions f^i. The differential exists if and only if all of the component differentials do.

First, from components to the vector-valued function. Clearly this definition of df(x) gives us a linear map from displacements in \mathbb{R}^m to displacements in \mathbb{R}^n. But does it satisfy the approximation inequality? Indeed, for every \epsilon>0 we can find a \delta so that all the inequalities

\displaystyle\left\lvert\left[f^i(x+t)-f^i(x)\right]-df^i(x)t\right\rvert<\frac{\epsilon}{n}\lVert t\rVert

are satisfied when \delta>\lVert t\rVert>0. Of course, a different \delta may work for each component, but we can pick the smallest of them all. Then the triangle inequality, along with the fact that each \lVert e_i\rVert=1, makes it a simple matter to find

\displaystyle\begin{aligned}\left\lVert\left[f(x+t)-f(x)\right]-df(x)t\right\rVert&=\left\lVert\left(\left[f^i(x+t)-f^i(x)\right]-df^i(x)t\right)e_i\right\rVert\\&\leq\left\lvert\left[f^i(x+t)-f^i(x)\right]-df^i(x)t\right\rvert\lVert e_i\rVert\\&=\sum\limits_{i=1}^n\left\lvert\left[f^i(x+t)-f^i(x)\right]-df^i(x)t\right\rvert\\&<\sum\limits_{i=1}^n\frac{\epsilon}{n}\lVert t\rVert\\&=\epsilon\lVert t\rVert\end{aligned}

so if the component functions are differentiable, then so is the function as a whole.

On the other hand, if the differential df(x) exists then for every \epsilon>0 there exists a \delta>0 so that if \delta>\lVert t\rVert>0 we have

\displaystyle\lVert\left[f(x+t)-f(x)\right]-df(x)t\rVert<\epsilon\lVert t\rVert

But then it’s easy to see that

\displaystyle\begin{aligned}\left\lvert\left[f^k(x+t)-f^k(x)\right]-df^k(x)t\right\rvert&=\sqrt{\left\lvert\left[f^k(x+t)-f^k(x)\right]-df^k(x)t\right\rvert^2}\\&\leq\sqrt{\sum\limits_{i=1}^n\left\lvert\left[f^i(x+t)-f^i(x)\right]-df^i(x)t\right\rvert^2}\\&=\lVert\left[f(x+t)-f(x)\right]-df(x)t\rVert\\&<\epsilon\lVert t\rVert\end{aligned}

and so each of the component differentials exists.

Finally, I should mention that if we also pick an orthonormal basis \tilde{e}_j for the input space \mathbb{R}^m we can expand each component differential df^i(x) in terms of the dual basis dx^j:

\displaystyle df^i(x)=\frac{\partial f^i}{\partial x^1}dx^1+\dots+\frac{\partial f^i}{\partial x^m}dx^m=\frac{\partial f^i}{\partial x^j}dx^j
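
For the sample function from before, writing x and y for the two input coordinates (so that the dual basis is dx and dy), these component differentials come out to

\displaystyle df^1(x,y)=y\,dx+x\,dy\qquad df^2(x,y)=\cos(x)\,dx\qquad df^3(x,y)=dx+dy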

Then we can write the whole differential df(x) out as a matrix whose entry in the ith row and jth column is \frac{\partial f^i}{\partial x^j}. If we write a displacement in the input as an m-dimensional column vector we find our estimate of the displacement in the output as an n-dimensional column vector:

\displaystyle\begin{pmatrix}df^1(x;t)\\\vdots\\df^n(x;t)\end{pmatrix}=\begin{pmatrix}\frac{\partial f^1}{\partial x^1}&\dots&\frac{\partial f^1}{\partial x^m}\\\vdots&\ddots&\vdots\\\frac{\partial f^n}{\partial x^1}&\dots&\frac{\partial f^n}{\partial x^m}\end{pmatrix}\begin{pmatrix}t^1\\\vdots\\t^m\end{pmatrix}
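
For our sample function this reads

\displaystyle\begin{pmatrix}df^1(x,y;t)\\df^2(x,y;t)\\df^3(x,y;t)\end{pmatrix}=\begin{pmatrix}y&x\\\cos(x)&0\\1&1\end{pmatrix}\begin{pmatrix}t^1\\t^2\end{pmatrix}

so at the point (0,2), for example, an input displacement (t^1,t^2) is sent to the approximate output displacement (2t^1,t^1,t^1+t^2).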

October 6, 2009