The Unapologetic Mathematician

Mathematics for the interested outsider

Properties of the Sine and Cosine

Blaise got most of the classic properties of the sine and cosine in the comments to the last post, so I’ll crib generously from his work. As a note: I know many people write powers of the sine and cosine functions as \sin^2(x) (for example) instead of \sin(x)^2. As I tell my calculus students every year, I refuse to do that myself, because \sin^2(x) should really mean \sin(\sin(x)), and I guarantee people will get confused between \sin^{-1}(x)=\arcsin(x) and \sin^{-1}(x)=\frac{1}{\sin(x)}.

First, let’s consider the function g(x)=\sin(x)^2+\cos(x)^2. We can take its derivative using the rules for derivatives of trigonometric functions from last time:

\displaystyle g'(x)=2\sin(x)\cos(x)-2\cos(x)\sin(x)=0

Since its derivative vanishes everywhere, this function is a constant. We easily check that g(0)=\sin(0)^2+\cos(0)^2=1, and so \sin(x)^2+\cos(x)^2=1 for every x.
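(As a quick sanity check, not part of the argument: here’s a little Python snippet, assuming the third-party SymPy library is installed, that confirms the derivative vanishes and the constant is {1}.)

    import sympy as sp

    x = sp.symbols('x')
    g = sp.sin(x)**2 + sp.cos(x)**2
    print(sp.simplify(sp.diff(g, x)))  # prints 0: the derivative vanishes identically
    print(g.subs(x, 0))                # prints 1: so the constant value is 1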

What does this mean? It tells us that if \sin(x) and \cos(x) are the lengths of the legs of a right triangle, the hypotenuse will have length {1}. Alternatively, the point with coordinates (\cos(x),\sin(x)) in the standard coordinate plane will lie on the unit circle. We haven’t talked yet about using integration to calculate the length of a path in the plane, but when we do we’ll see that the length of the arc on the circle from (1,0) to (\cos(x),\sin(x)) is exactly x.

This gives us another definition for the sine and cosine functions — one closer to the usual one people see in a trigonometry class. Given an input value x, walk that far counterclockwise around the unit circle, starting from the point (1,0). The coordinates of the point you end up at are the cosine and sine of x, respectively. And this gives us our “original” definitions: given a right triangle, it is similar to a right triangle whose hypotenuse has length {1}, and the sine and cosine of either acute angle are the lengths of the legs opposite and adjacent to it.
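(We can’t prove the arc-length claim until we build that machinery, but nothing stops us from spot-checking it numerically. Here’s a rough Python sketch, where the chord count n is an arbitrary choice, that approximates the arc from (1,0) to (\cos(x),\sin(x)) by many short chords and compares the total length to x.)

    import math

    def arc_length(x, n=100_000):
        # Approximate the arc of the unit circle from angle 0 to x
        # by summing the lengths of n short chords.
        total = 0.0
        for i in range(n):
            t0 = x * i / n
            t1 = x * (i + 1) / n
            total += math.hypot(math.cos(t1) - math.cos(t0),
                                math.sin(t1) - math.sin(t0))
        return total

    print(arc_length(1.0))  # ~1.0
    print(arc_length(2.5))  # ~2.5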

Now, since \sin(x)^2 and \cos(x)^2 are both nonnegative and sum to {1}, each must be bounded above by {1}. Thus -1\leq\sin(x)\leq1 and -1\leq\cos(x)\leq1. In particular, any time that \sin(x_0)=0 we must have \cos(x_0)=\pm1.

We know that \sin(0)=0 and \cos(0)=1, so if we ever have another point t where \sin(t)=0 and \cos(t)=1, then t is a period. This is because the differential equation determines the future behavior of \sin(t+x) from these initial values in exactly the same way it determined the behavior of \sin(x) starting from {0}. In fact, if \sin(p)=0 and \cos(p)=-1, then the future behavior of \sin(p+x) will be exactly the negative of the behavior of \sin(x), and so eventually \sin(2p)=0 and \cos(2p)=1 again.

Admittedly, I’m sort of waving my hands here without an existence/uniqueness proof for solving differential equations. But the geometric intuition should suffice for the idea that since the function’s value and first derivative at {0} are enough to determine the function, then the specific point we know them at shouldn’t matter.

So, does the sine function have a positive zero? That is, is there some p>0 so that \sin(p)=0? If so, the lowest such p would have to have \cos(p)=-1: positive numbers near {0} have positive sines, so \sin must come down through its first positive zero, and since \cos(p)=\pm1 there, only \cos(p)=-1 will do. The next zero would then be at 2p, with \sin(2p)=0 and \cos(2p)=1, and the whole thing repeats with period 2p.

The function \sin(x) starts out increasing, and so \cos(x) decreases (since \cos(x)^2=1-\sin(x)^2). If \sin(x) has a maximum, then \cos(x) (its derivative) must cross zero there. Then \sin(x) is decreasing, and it cannot increase again unless \cos(x) crosses zero again. But if \cos(x) crosses zero a second time, then by Rolle’s theorem it has a local extremum in between, a point where its derivative -\sin(x) vanishes; so \sin(x) must cross zero itself before it can increase again.

So if we are to avoid \sin(x) having a positive zero, it must either increase towards some horizontal asymptote at a height no more than {1}, or increase to a maximum and then decrease towards such an asymptote. But for a function to have a horizontal asymptote it must approach a horizontal line, and (waving my hands again) its derivative must approach {0}. Since \cos(x)^2=1-\sin(x)^2, pushing \cos(x) towards {0} pushes \sin(x)^2 towards {1}. That is, we can only have \sin(x) approaching an asymptote at y=1, while \cos(x) approaches an asymptote at y=0.

But if \cos(x) approaches an asymptote, its derivative must also asymptotically approach {0}. And this derivative is -\sin(x), which we have just seen would approach -1! So none of these asymptotes are possible!

So the sine function must have a positive zero: \sin(p)=0. And thus the sine and cosine (and all other solutions to this differential equation) will have period 2p.

Finally, what the heck is this value p? In point of fact, we have no way of telling. But it might come in handy, so we’ll define this number and give it a new name: \pi. Whenever we say \pi we’ll mean “the first positive zero of the sine function”.
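(And while we can’t express \pi in any simpler closed form, we can certainly compute it straight from the differential equation. Here’s a numerical sketch in Python, with the step size and the Runge-Kutta scheme as arbitrary choices, that integrates the system s'=c, c'=-s from s(0)=0, c(0)=1 without ever calling a library sine, and locates the first sign change of s.)

    def rk4_step(s, c, h):
        # One fourth-order Runge-Kutta step for the system s' = c, c' = -s.
        k1s, k1c = c, -s
        k2s, k2c = c + h / 2 * k1c, -(s + h / 2 * k1s)
        k3s, k3c = c + h / 2 * k2c, -(s + h / 2 * k2s)
        k4s, k4c = c + h * k3c, -(s + h * k3s)
        return (s + h / 6 * (k1s + 2 * k2s + 2 * k3s + k4s),
                c + h / 6 * (k1c + 2 * k2c + 2 * k3c + k4c))

    h = 1e-4
    t, s, c = 0.0, 0.0, 1.0
    while True:
        s_next, c_next = rk4_step(s, c, h)
        if s > 0 and s_next <= 0:  # the sine solution just crossed zero from above
            break
        t, s, c = t + h, s_next, c_next

    # Linear interpolation across the sign change pins down the zero.
    p = t + h * s / (s - s_next)
    print(p)  # ~3.14159265..., and c here is ~ -1, just as the argument predicts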

Here I want to point out that I’ve fulfilled my boast of a few months ago on some other weblog. In my tireless rant against the \pi-fetishism that infests the geek community, I told someone that \pi can be derived, ultimately, solely from the properties of the real number system. Studying this field — itself uniquely specified on algebraic and topological grounds — leads us both to differential calculus and to power series, and from there to series solutions to differential equations. One of the most natural differential equations in the world thus gives rise to the trigonometric functions, and the definition of \pi follows from their properties. There is no possible way it could be anything other than what it is when you see it from this side, while the geometric definition hinges on some very deep assumptions on the geometry of spacetime.

October 14, 2008 - Posted by | Analysis, Calculus

12 Comments »

  1. I recently was helping a student who wrote \sin^{-2}x for one divided by the square of the sine of x. I pretty quickly pushed her away from that.

    And the fact that \sin^{-1}x means the arcsin to most people who read it may explain why we bother to keep the cosecant around. (Similarly for secant and cotangent.) Of course, one could argue that it makes sense to have six trigonometric ratios since there are six possible quotients of sides in a right triangle, but I suspect the fact that jettisoning csc, sec, cot would cause notational ambiguity explains why we still see them.

    Comment by Michael Lugo | October 14, 2008 | Reply

  2. A similar, but slightly slicker, discussion of these issues (especially the periodicity of sine and cosine) can be found on pages 5 and 6 of 243functions3.pdf.

    The slickness exhibited therein is certainly not my own. I am 99% sure that this derivation is taken from Rudin’s _Principles of Mathematical Analysis_. When I get the chance, I will confirm this and add an explicit attribution.

    Comment by Pete L. Clark | October 14, 2008 | Reply

  3. Oh, I’m sure there are slicker ways in. This one gets there, though, which is all I really need. My readers are urged to read your PDF.

    Comment by John Armstrong | October 14, 2008 | Reply

  4. Upon examination, it seems a lot of the slickness is wrapped up in making explicit my hand-waving about “if you’re asymptotically approaching a value, your derivative must also be asymptotically approaching zero, as (thus) must its derivative.” Or consequences thereof.

    Comment by John Armstrong | October 14, 2008 | Reply

  5. “while the geometric definition hinges on some very deep assumptions on the geometry of spacetime”

    That’s twice in one day that you’ve baffled me, John! I think there’s probably some missing context (related to this ranting of yours somewhere else) that might explain this statement, but to me the “geometric definition” of \pi would be the one I learned in the third grade (ratio of circumference to diameter). I thought you were going to address Blaise’s question and demonstrate the mathematical equivalence between this definition and the definition you’ve given in this post. The geometric definition can be mathematically formalized, and there is a formal proof of the equivalence, and it has nothing to do with deep assumptions about “geometry of spacetime” in any physical sense. So, what gives?

    Comment by Todd Trimble | October 14, 2008 | Reply

  6. The ratio only has that value in flat spacetime. If there’s any curvature around, you’re probably not going to get that ratio.

    The point is that any mathematician working anywhere will get the same definition of \pi. Including, for example, those in Greg Egan’s new book.

    The equivalence I can’t finish off properly until I actually do arc-length, but you should be able to finish it from here.

    Comment by John Armstrong | October 14, 2008 | Reply

  7. Yes, John, obviously I’m working with the circle as a subset of the Euclidean plane, using standard arc-length, etc. And so was Blaise I expect. This is formal mathematics; physical referents are quite beside the point in establishing the mathematical equivalence. (And yes, “teach”, I can finish it off from here, thank you very much! 😀 )

    Comment by Todd Trimble | October 15, 2008 | Reply

  8. […] solve certain differential equations, which led us to defining the functions sine and cosine. Then I showed that the sine function must have a least positive zero, which we define to be […]

    Pingback by Pi: A Wrap-Up « The Unapologetic Mathematician | October 16, 2008 | Reply

  9. OK, let’s see if I got this, for the final bit…

    Given a curve (x(t), y(t)), the length ds of the curve from t to t+dt is \sqrt{(dx)^2 + (dy)^2}. So the total length of the curve from {0} to u is \int_0^u\sqrt{(\frac{dx}{dt})^2 + (\frac{dy}{dt})^2}dt. Applying this to the curve (\cos(t), \sin(t)) (as defined by the differential equation), we get s = \int_0^u\sqrt{(\frac{d\cos(t)}{dt})^2 + (\frac{d\sin(t)}{dt})^2}dt = \int_0^u\sqrt{(-\sin(t))^2 + (\cos(t))^2}dt = \int_0^u\sqrt{\sin(t)^2 + \cos(t)^2}dt = \int_0^u\sqrt{1}dt = u.

    So the length of the circular arc from (1,0) to (\cos(u), \sin(u)) is u. To go from u=0 to u=2\pi, you measure the whole circumference as 2\pi, which ties it back to the “classical” definition of \pi.

    Did I finish it off right?

    Comment by Blaise Pascal | October 16, 2008 | Reply

  10. Yeah, though I’ll do that again myself when I actually do arc-lengths.

    Which will be when I get back into multivariable calculus.

    Which will be after I get back to linear algebra.

    See, I was going to do linear algebra, then multivariable, then more of these series, because I had to wave my hands at points to do the interchange of summation of double-series, which is really an example of Fubini’s theorem in the case of Riemann-Stieltjes double-integrals. But now I’ve done these and I’ve got sine and cosine and the exponential to work with, so I can use those in my coverage of linear algebra. So it all works out in the end.

    Comment by John Armstrong | October 16, 2008 | Reply

  11. […] range is exactly that of the cosine function. Let’s consider the cosine restricted to the interval [0,\pi], where it’s injective. […]

    Pingback by Inner Products and Angles « The Unapologetic Mathematician | April 17, 2009 | Reply

  12. […] the famous trigonometric identity. That is, every complex number of the form \cos(\theta)+i\sin(\theta) lies a unit distance from the complex number […]

    Pingback by Complex Numbers and the Unit Circle « The Unapologetic Mathematician | May 26, 2009 | Reply

