The Unapologetic Mathematician

Mathematics for the interested outsider

Sine and Cosine

Now I want to consider the differential equation f''(x)+f(x)=0. As I mentioned at the end of last time, we can write this as f''(x)=(-1)f(x) and find two solutions — \exp(ix) and \exp(-ix) — by taking the two complex square roots of -1. But the equation doesn’t use any complex numbers. Surely we can find real-valued functions that work.

Indeed, we can, and we’ll use the same techniques as we did before. We again find that any solution must be infinitely differentiable, and so we will assume that it’s analytic. Thus we write

\displaystyle f(x)=\sum\limits_{k=0}^\infty a_kx^k

and we take the first two derivatives

\displaystyle f'(x)=\sum\limits_{k=0}^\infty(k+1)a_{k+1}x^k
\displaystyle f''(x)=\sum\limits_{k=0}^\infty(k+2)(k+1)a_{k+2}x^k

Plugging these into the equation and comparing coefficients of x^k, we find

\displaystyle a_{k+2}=-\frac{a_k}{(k+2)(k+1)}

for every natural number k. The values a_0=f(0) and a_1=f'(0) are not specified, and we can use them to set initial conditions.

We pick two sets of initial conditions to focus on. In the first case, f(0)=0 and f'(0)=1, while in the second case f(0)=1 and f'(0)=0. We call these two solutions the “sine” and “cosine” functions, respectively, writing them as \sin(x) and \cos(x).

Let’s work out the series for the cosine function. We start with a_1=0, and the recurrence relation tells us that all the odd terms will be zero. So let’s just write out the even terms a_{2k}. First off, a_0=1. Then to move from a_{2k} to a_{2k+2} we multiply by \frac{-1}{(2k+1)(2k+2)}. So in moving from a_0 all the way to a_{2k} we’ve multiplied by -1 k times, and we’ve divided by every number from {1} to 2k. That is, we have a_{2k}=\frac{(-1)^k}{(2k)!}, and we have the series

\displaystyle\cos(x)=\sum\limits_{k=0}^\infty\frac{(-1)^kx^{2k}}{(2k)!}

This isn’t the usual form for a power series, but it’s more compact than including all the odd terms. A similar line of reasoning leads to the following series expansion for the sine function:

\displaystyle\sin(x)=\sum\limits_{k=0}^\infty\frac{(-1)^kx^{2k+1}}{(2k+1)!}
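
As a quick numerical check, here is a small Python sketch comparing partial sums of these series against the library sine and cosine; the twenty-term cutoff is an arbitrary choice, more than enough for small arguments:

    import math

    def cos_series(x, terms=20):
        # Partial sum of sum_k (-1)^k x^(2k) / (2k)!
        return sum((-1)**k * x**(2*k) / math.factorial(2*k) for k in range(terms))

    def sin_series(x, terms=20):
        # Partial sum of sum_k (-1)^k x^(2k+1) / (2k+1)!
        return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1) for k in range(terms))

    for x in (0.0, 1.0, 2.5):
        # The differences should sit at the level of floating-point round-off.
        print(x, cos_series(x) - math.cos(x), sin_series(x) - math.sin(x))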

Any other solution with f(0)=a and f'(0)=b can then be written as a\cos(x)+b\sin(x).

In particular, consider the first solutions we found above: f(x)=\exp(ix) and f(x)=\exp(-ix). Each of them has f(0)=1, and f'(0)=\pm i, depending on which solution we pick. That is, we can write \exp(ix)=\cos(x)+i\sin(x), and \exp(-ix)=\cos(x)-i\sin(x).

Of course, the second of these equations is just the complex conjugate of the first, and so it’s unsurprising. The first, however, is called “Euler’s formula”, because it was proved by Roger Cotes. It’s been seen as particularly miraculous, but this is mostly because people’s first exposure to the sine and cosine functions usually comes from a completely different route, and the relationship between exponentials and trigonometry seems utterly mysterious. Seen from the perspective of differential equations (and other viewpoints we’ll see sooner or later) it’s the most natural thing in the world.
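
For the skeptical, here is a tiny Python sketch that checks Euler’s formula numerically at a few arbitrary sample points, using the standard cmath and math modules:

    import cmath
    import math

    for x in (0.5, 1.0, math.pi / 3, 2.0):
        lhs = cmath.exp(1j * x)                    # exp(ix)
        rhs = complex(math.cos(x), math.sin(x))    # cos(x) + i sin(x)
        print(x, abs(lhs - rhs))                   # differences near machine precision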

Euler’s formula also lets us translate back from trigonometry into exponentials:

\displaystyle\cos(x)=\frac{\exp(ix)+\exp(-ix)}{2}
\displaystyle\sin(x)=\frac{\exp(ix)-\exp(-ix)}{2i}

And from these formulæ and the differentiation rules for exponentials we can easily work out the differentiation rules for the sine and cosine functions:

\displaystyle\sin'(x)=\cos(x)
\displaystyle\cos'(x)=-\sin(x)
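
For instance, differentiating the exponential expression for the sine, using \exp'(ax)=a\exp(ax), gives

\displaystyle\sin'(x)=\frac{i\exp(ix)+i\exp(-ix)}{2i}=\frac{\exp(ix)+\exp(-ix)}{2}=\cos(x)

and the rule for the cosine follows in the same way.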


October 13, 2008 - Posted by | Analysis, Calculus, Differential Equations

21 Comments »

  1. I’ve seen these definitions of sine and cosine before. I’ve seen descriptions that the two principal solutions of f”+f=0 are sine and cosine, showing how sin’ = cos and cos’ = -sin, so sin”+sin = -sin+sin = 0 and cos”+cos = -cos+cos = 0. So the traditional, geometric definitions of sine and cosine make this definition work.

    What I’ve not seen is the other way around. Given s(t), c(t) which satisfy s”+s=0, s(0)=0, s’(0)=1 and c”+c=0, c(0)=1, c’(0)=0, how do you show the usual properties of sine and cosine? Can you show that s, c are periodic with period 2*pi? That s is odd and c is even? That both are bounded between -1 and 1? That one is the other shifted by pi/2? I don’t know how to, but I’m sure it must be doable.

    Comment by Blaise Pascal | October 13, 2008 | Reply

  2. Patience, Blaise. I can’t fit it all into a single post.

    Most of this is pretty straightforward from the last few formulæ. The one catch is the period. That one I’ll leave you to puzzle over for a while.

    Comment by John Armstrong | October 13, 2008 | Reply

  3. I know the catch is the period. I’ve tried to work this through in the past.

    I can get boundedness easily enough: look at g=s^2+c^2. g’=2ss’+2cc’ = 2sc - 2cs = 0, so s^2+c^2 is constant. Since g(0) = 1, and s, c are both real functions, s^2, c^2 are both non-negative, so s^2, c^2 are both bounded by 1, so -1 <= s, c <= 1. Now suppose there is a t > 0 with s(0)=s(t)=0. In that case, c(t) can only be 1 or -1. If there are multiple zero-crossings of s, each with slope 1 or -1, then these zero-crossings must alternate (s can’t go from positive to negative twice in a row without ever going from negative to positive). If c(t) = 1, we have a period of t. So for s to be non-periodic, it must have at most one zero-crossing at some t>0.

    It’s not inconceivable that s is bounded but has no zero-crossings past the origin. f=1-1/(x+1) is bounded by 1 and only crosses 0 once. f even shares a few properties with s: when both s and f are positive, they both curve down, they both cross the origin with slope 1, etc. But where f”(t) gets smaller as t gets larger, s”(t) grows (negatively) as s(t) does. s’ (a.k.a. c) gets smaller as s gets closer to 1, and at a larger and larger rate. s can’t avoid a maximum without c avoiding a zero-crossing, but c has a slope of -s, so the more s goes up, the faster c goes down. So c must have a zero-crossing, s must achieve a maximum of 1, and then s continues to curve down, faster and faster, as c is negative and growing in magnitude. It can’t curve up (and thus find an asymptote) without crossing 0 again. Therefore (in this not-quite-rigorous form), s must have a +to- zero-crossing at some t>0.

    The same argument can be made for a u>t such that s(u)=0. Looking at s(v) for v>t, s starts negative but curves up. c(v) starts at a minimum and gets larger, also curving up. Since c(v) can’t curve down without a zero-crossing, it can’t have an asymptote, so it must zero-cross. Therefore, s(v) must have a minimum and then rise again. Once rising, it can’t have an asymptote, so it must zero-cross at u.

    Therefore, there must be a u such that s(0)=s(u) and c(0)=c(u), so s, c must be periodic with period u.

    I can get that since the Taylor series for sin(x) only has odd powers, sin(x) is an odd function, and since cos(x) only has even powers, it’s an even function. My “argument” for periodicity even hints that s, c are shifted versions of one another, with a shift of t/2. I’ll even admit that the relationship s^2+c^2=1 is very suggestive of a circle, but I still don’t see how to get t=pi.

    Comment by Blaise Pascal | October 13, 2008 | Reply

  4. I’ll still let you stew, but I might crib from your comment a bit tomorrow :D

    Comment by John Armstrong | October 13, 2008 | Reply

  5. Great… tomorrow is going to be an explanation of how all the properties of sine and cosine can be derived from the power series and differential equation — except the period, which is obviously going to wait for a later date…..

    Comment by Blaise Pascal | October 13, 2008 | Reply

  6. I’ll tell you this much: you’re not thinking enough about what \pi is.

    And to be extra-mysterious: it’s the same as the reason that the gravitational acceleration g\approx9.8\,\mathrm{m/s^2} at the surface of the earth is numerically almost exactly \pi^2.

    Comment by John Armstrong | October 13, 2008 | Reply

  7. I’m also not thinking about how to put latex into comments. How do you do that? It would make my presentation better.

    Comment by Blaise Pascal | October 14, 2008 | Reply

  8. You wrap it in $latex and $. It works the same at any other WordPress-hosted blath. More information is here.

    Comment by John Armstrong | October 14, 2008 | Reply

  9. The “extra-mysterious fact” makes no sense to me: the constant g depends on choice of dimensional units and also on physical (not geometric) data, such as mass of the earth. So, I’ll bite.

    Comment by Todd Trimble | October 14, 2008 | Reply

  10. [...] and Cosine Blaise got most of the classic properties of the sine and cosine in the comments to the last post, so I’ll crib generously from his work. As a note: I know many people write powers of the [...]

    Pingback by Properties of the Sine and Cosine « The Unapologetic Mathematician | October 14, 2008 | Reply

  11. Todd,

    you’re right that the constant g depends on a bunch of extra-mathematical things. But the fact that g \approx \pi^2 corresponds to the fact that a pendulum of length one meter has a period of very nearly two seconds. Indeed, this was thrown around as a definition for the meter, but it’s not all that practical if you don’t have accurate time measurement, and also the gravitational acceleration isn’t constant over the earth’s surface. I wrote about this a year ago and got lots of readers.

    Comment by Michael Lugo | October 14, 2008 | Reply

    That’s very interesting, Michael; I was quite unaware of the “history of the meter”, and I thank you for bringing this to my attention. But I think your readers arivero and Richard Koehler made very pertinent points: that the operational definition of the meter as 10^{-7} of the distance from the equator to the North Pole [the one I had in mind] comes so close to the definition via the pendulum must be counted as an amusing coincidence.

    Comment by Todd Trimble | October 14, 2008 | Reply

  13. [...] series can be used to solve certain differential equations, which led us to defining the functions sine and cosine. Then I showed that the sine function must have a least positive zero, which we define to be [...]

    Pingback by Pi: A Wrap-Up « The Unapologetic Mathematician | October 16, 2008 | Reply

  14. Solving f” + f = 0 by integration.

    There is an alternative way to solve the linear second-order ODE

    f” + f = 0, (1)

    Writing f’ = df/dx, eq. (1) can be put in the form

    d(df/dx)/dx + f = 0, (2)

    and by using d/dx = (df/dx)*d/df, eq. (2) becomes

    (df/dx)*d(df/dx)/df + f = 0, (3)

    Integrating eq. (3) with respect to f yields

    0.5*(df/dx)^2 + 0.5*f^2 = c, (4)

    If we set 2*c = 1, then eq. (4) gives

    df/dx = ± √(1 – f^2), (5)

    There are two choices of sign for df/dx. The plus sign gives

    x = arcsin(f) = ∫ df/√(1 – f^2), (6)

    while the minus sign gives

    x = arccos(f) = – ∫ df/√(1 – f^2), (7)

    each up to a constant of integration. Hence from eq. (6) and eq. (7) we can write the general solution of eq. (1) as

    f(x) = a*sin(x) + b*cos(x), (8)

    where a and b are arbitrary constants.

    Comment by Rohedi | November 29, 2008 | Reply

  15. Hi Rohedi,

    I need your help to solve d^2y/dt^2 + ay = b using another simple integration method. Thanks.

    Comment by Denaya | January 18, 2009 | Reply

    Sorry @rohedi, my aim is still to solve the linear second-order differential equation using an integration technique, but without using the combination of arcsin(x) and arccos(x) explained in your previous comment. Please help me.

    Comment by Denaya | January 18, 2009 | Reply

    Sorry, Miss Denaya, this is Mr. John Armstrong’s math blog, so your equation should be addressed to him, not to me. Hopefully he will take an interest in your equation.

    Comment by rohedi | January 20, 2009 | Reply

  18. Sorry, Denaya, I’m not sure what you need. I’m not really a differential equations guy…

    What you might try is what I was doing here: assume that the function y is a power series in x. Then take the second derivative formally (term-by-term) and turn the differential equation into a recurrence relation for the coefficients.

    Comment by John Armstrong | January 21, 2009 | Reply

    Oh, thank you for your guidance, Mr. John Armstrong. Now Denaya believes that, with the right trick, your scheme can solve not only my problem d^2y/dt^2 + ay = b, but also more general second-order linear ordinary differential equations. Again, thank you for your attention.

    Comment by Denaya Lesa | January 23, 2009 | Reply

    Apologies, Mr. John Armstrong; I found this math blog through http://metrostateatheists.wordpress.com/2008/12/16/differential-equations-how-they-relate-to-calculus/.

    Comment by Denaya Lesa | January 23, 2009 | Reply

  21. [...] numbers there was one perspective I put off, and now need to come back to. It makes deep use of Euler’s formula, which ties exponentials and trigonometric functions together in the [...]

    Pingback by Complex Numbers and the Unit Circle « The Unapologetic Mathematician | May 26, 2009 | Reply

