The Unapologetic Mathematician

Mathematics for the interested outsider

The Cross Product and Pseudovectors

Finally we can get to something that is presented to students in multivariable calculus and physics classes as if it were a basic operation: the cross product of three-dimensional vectors. This only works out because the Hodge star defines an isomorphism from A^2(V) to V when \dim(V)=3. We define

u\times v=*(u\wedge v)

All the usual properties of the cross product are really properties of the wedge product combined with the Hodge star. Geometrically, u\times v is defined as a vector perpendicular to the plane spanned by u and v, which is exactly what the Hodge star produces. We choose which perpendicular direction by the “right-hand rule”, but this is only because we choose the basis vectors e_1, e_2, and e_3 (or as these classes often call them: \hat{\imath}, \hat{\jmath}, and \hat{k}) by the same convention, and this choice defines an orientation we have to stick with when we define the Hodge star. The length of the cross product is the area of the parallelogram spanned by u and v, again as expected from the Hodge star. Algebraically, the cross product is anticommutative and linear in each variable. These are properties of the wedge product, and the Hodge star — being linear — preserves them.
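To see concretely that this really is the familiar operation, here's a quick numerical sanity check — a sketch in Python with NumPy, purely an illustration and not part of the argument. We represent the bivector u\wedge v by its antisymmetric matrix of components and apply the star in the standard oriented basis:

```python
import numpy as np

def wedge(u, v):
    # Components of the bivector u ∧ v: the antisymmetric matrix u_i v_j - u_j v_i.
    return np.outer(u, v) - np.outer(v, u)

def hodge_star(B):
    # Hodge star on bivectors in R^3, in the standard oriented basis:
    # *(e2 ∧ e3) = e1, *(e3 ∧ e1) = e2, *(e1 ∧ e2) = e3.
    return np.array([B[1, 2], B[2, 0], B[0, 1]])

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

# *(u ∧ v) recovers the familiar cross product.
assert np.allclose(hodge_star(wedge(u, v)), np.cross(u, v))
```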

The biggest fib we tell students is that the value of the cross product is a vector. It certainly looks like a vector on the surface, but the problem is that it doesn’t transform like a vector. Before the advent of thinking of all these things geometrically, people thought of a vector quantity as a triple of real numbers that transform in a certain way when we change to a different orthonormal basis. This is inspired by the physical world, where there’s no magic orthonormal basis floating out somewhere to pick out coordinates. We should be able to turn our heads and translate the laws of physics to compensate exactly. These rotations form the special orthogonal group of orientation- and inner product-preserving transformations, but we can also throw in reflections to get the whole orthogonal group, of all transformations from one orthonormal basis to another.

So let’s imagine what happens to a cross product when we reflect the world. In fact, stand by a mirror and hold out your right hand in the familiar way, with your index finger along one imagined vector u, your middle finger along another vector v, and your thumb pointing in the direction of the cross product u\times v. Now look in the mirror.

The orientation has been reversed, and mirror-you is holding out its left hand! If mirror-you tried to use its version of the cross product, it would find that the cross product should go in the other direction. The cross product doesn’t behave like all the other vectors in the world, because it doesn’t reflect the same way.
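We can even watch this happen numerically. Here's a small Python/NumPy illustration (again, just a sketch of the point, not part of the original argument): reflect everything through the yz-plane and compare.

```python
import numpy as np

# A mirror: reflection through the yz-plane, an orthogonal map with det = -1.
R = np.diag([-1.0, 1.0, 1.0])

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

# Ordinary vectors reflect as R @ u.  The cross product picks up an extra sign:
# for any orthogonal R we have R(u) x R(v) = det(R) R(u x v), which here is -R(u x v).
assert np.allclose(np.cross(R @ u, R @ v), -R @ np.cross(u, v))
```

So mirror-you's cross product points opposite to the reflection of your cross product, exactly as the right hand in the mirror shows.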

Physicists to this day use the old language describing a triple of real numbers that transform like a vector under rotations, but point the wrong way under reflections. They call such a quantity a “pseudovector”. And they also have a word for a single real number that somehow mysteriously flips its sign when we apply a reflection: a “pseudoscalar”. Whenever we read about scalar, vector, pseudovector, and pseudoscalar quantities, they just mean real numbers (or triples of them) and specify how they change under certain orthogonal transformations.

But geometrically we can see exactly what’s going on. These are just the spaces A^0(V)=\mathbb{R}, A^1(V)=V, A^2(V), and A^3(V), along with their representations of the orthogonal group \mathrm{O}(V). And the “pseudo” means we’ve used the Hodge star — which depends essentially on a choice of orientation — to pretend that bivectors in A^2(V) and trivectors in A^3(V) are just like vectors in V and scalars in \mathbb{R}, respectively. And we can get away with it for a long time, until a mirror shows up.

The only essential tool from multivariable calculus or introductory physics built from the cross product that we might have need of is the “triple scalar product”, which takes three vectors u, v, and w. It calculates the cross product v\times w of two of them, and then the inner product \langle u,v\times w\rangle=\langle u,*(v\wedge w)\rangle with the third to get a scalar. But this is the coefficient of our unit cube \omega in the definition of the Hodge star:

\displaystyle\langle u,*(v\wedge w)\rangle\omega=u\wedge**(v\wedge w)=u\wedge v\wedge w

since **(v\wedge w)=(-1)^{2\cdot(3-2)}v\wedge w. That is, the triple scalar product gives the (oriented) volume of the parallelepiped spanned by u, v, and w, just as we remember from those classes. We really don’t need the cross product as a primitive operation at all, and in the long run it only leads to confusion as it identifies vectors and pseudovectors without the explicit use of the orientation-dependent Hodge star to keep us straight.
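Numerically, this oriented volume is the familiar determinant formula — a quick Python/NumPy check of the identity, nothing more:

```python
import numpy as np

u = np.array([1.0, 0.0, 2.0])
v = np.array([0.0, 3.0, 0.0])
w = np.array([1.0, 1.0, 1.0])

# The triple scalar product <u, v x w> ...
triple = np.dot(u, np.cross(v, w))

# ... is the determinant of the matrix with rows u, v, w: the oriented
# volume of the parallelepiped they span.
assert np.isclose(triple, np.linalg.det(np.array([u, v, w])))
```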

November 10, 2009 Posted by | rants | 3 Comments

Euclidean Spaces

In light of our discussion of differentials, I want to make a point here that is usually glossed over in most treatments of multivariable calculus. In a very real sense, the sources and targets of our functions are not the vector spaces \mathbb{R}^n.

Let’s think about what we need to have a vector space. We need a way to add vectors and to multiply them by scalars. Geometrically, addition proceeds by placing vectors as arrows “tip-to-tail” and filling in the third side of the triangle. Scalar multiplication takes a vector as an arrow and stretches, shrinks, or reverses it depending on the value of the scalar. But both of these require us to think of a vector as an arrow which points from the origin to the point with coordinates given by the components of our vector.

But this makes the origin a very special point indeed. And why should we have any such special point, from a geometric perspective? We already insisted that we didn’t want to choose a basis for our space that would make some directions more special than others, so why should we have to choose a special point?

What really matters in our spaces is their topology. But we don’t want to forget all of the algebraic structure either. There are still some vestiges of the structure of a vector space that make sense in the absence of an origin. Indeed, we can still talk about it as an affine space, where the idea of displacement vectors between points still makes sense. And these displacement vectors will be actual vectors in \mathbb{R}^n. Like any torsor, this means that our space “looks like” the group (here, vector space) we use to describe displacements, but we’ve “forgotten” which point was the origin. We call the result a “Euclidean” space, since such spaces provide nice models of the axioms of Euclidean geometry.

So let’s try to be a little explicit here: we actually have two different kinds of geometric objects floating around right now. First are the points in an n-dimensional Euclidean space. We can’t add these points, or multiply them by scalars, but we can find a displacement vector between two of them. Such a displacement vector will be in the n-dimensional real vector space \mathbb{R}^n. When it’s convenient to speak in terms of coordinates, we first pick an (arbitrary) origin point. Now if we’re sloppy we can identify a point in the Euclidean space with its displacement vector from the origin, and thus confound the Euclidean space of points and the vector space of displacements. We can proceed to choose a basis of our vector space of displacements, which gives coordinates to the Euclidean space of points; the point (x^1,\dots,x^n) is the one whose displacement vector from the origin is x^ie_i.
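The distinction is easy to enforce mechanically. Here's a minimal sketch in Python — the class names `Point` and `Vector` are my own illustration — in which subtracting two points gives a displacement vector and a point can be translated by a vector, but adding two points is a type error:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Vector:
    coords: tuple

    def __add__(self, other):
        # Vectors may be added to one another.
        if isinstance(other, Vector):
            return Vector(tuple(a + b for a, b in zip(self.coords, other.coords)))
        return NotImplemented

@dataclass(frozen=True)
class Point:
    coords: tuple

    def __sub__(self, other):
        # The displacement vector from `other` to `self`.
        return Vector(tuple(a - b for a, b in zip(self.coords, other.coords)))

    def __add__(self, other):
        # A point may be translated by a displacement vector...
        if isinstance(other, Vector):
            return Point(tuple(a + b for a, b in zip(self.coords, other.coords)))
        # ...but adding two points is meaningless, and fails loudly.
        return NotImplemented

p = Point((1.0, 2.0))
q = Point((4.0, 6.0))
assert q - p == Vector((3.0, 4.0))
assert p + (q - p) == q
```

Trying `p + q` raises a `TypeError`, which is exactly the sort of nonsense operation on points that sloppy treatments quietly permit.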

Now, the rant. Some multivariable calculus books are careful about not doing nonsense things like “adding” or “scalar multiplying” points, but many do exactly these sorts of things, giving the impression to students that points are vectors. Even among the texts that are careful, I don’t recall seeing any that actually go so far as to mention that a point is not a vector. When I teach the course I’m careful to point out that they’re not quite the same thing (though not in quite as much detail as this) and I go so far as to write them differently, with vector coordinates written out between angle brackets instead of parens. Without some sort of distinction being explicitly drawn between points and vectors, more students do fall into the belief that the two are the same thing, or (worse) that each is “the same thing as” a list of numbers in a coordinate representation. Within the context of a course on multivariable calculus, it’s possible to get by with these ideas, but in the long run they will have to be corrected before proceeding into more general contexts.

So, why bring this up now in particular? Because it explains the notation we use in the differential. When we write df(x;t), the semicolon distinguishes between the point variable and the vector variable. It becomes even more apparent when we choose coordinates and write df(x^1,\dots,x^n;t^1,\dots,t^n). Notice that we only ask that df act linearly on the vector variable, since “linear transformations” are defined on vector spaces, not Euclidean spaces.
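In coordinates the split is easy to see numerically. A sketch in Python (the forward difference quotient and the sample function are my own choices): the differential takes a point, where f is linearized, and a vector, to which the resulting linear map is applied.

```python
def df(f, x, t, h=1e-6):
    """Approximate df(x; t): linearize f at the point x and apply the
    result to the displacement vector t.  Linearity holds in t, not in x."""
    return (f([xi + h * ti for xi, ti in zip(x, t)]) - f(x)) / h

f = lambda p: p[0] ** 2 + 3.0 * p[1]

x = [1.0, 2.0]   # the point variable: where we linearize
t = [1.0, 0.0]   # the vector variable: the displacement we feed the linear map

# For this f, df(x; t) = 2 x^1 t^1 + 3 t^2, which is 2 at this point and direction.
print(df(f, x, t))   # close to 2.0
```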

September 28, 2009 Posted by | Analysis, Calculus, rants, Topology | 3 Comments

Multivariable Limits

As we’ve seen, when our target is a higher-dimensional real space continuity is the same as continuity in each component. But what about when the source is such a space? It turns out that it’s not quite so simple.

One thing, at least, is unchanged. We can still say that f:\mathbb{R}^m\rightarrow\mathbb{R}^n is continuous at a point a\in\mathbb{R}^m if \lim\limits_{x\to a}f(x)=f(a). That is, if we have a sequence \left\{a_i\right\}_{i=0}^\infty of points in \mathbb{R}^m (we only need to consider sequences because metric spaces are sequential) that converges to a, then the image sequence \left\{f(a_i)\right\}_{i=0}^\infty converges to f(a).

The problem is that limits themselves in higher-dimensional real spaces become a little hairy. In \mathbb{R} there are really only two directions along which a sequence can converge to a given point. If we have a sequence converging from the right and another converging from the left, that is basically enough to establish what the limit of the function is (and whether it has one). In higher-dimensional spaces — even just in \mathbb{R}^2 — we have so many possible approaches to any given point that, in order to avoid an infinite amount of work, we have to use something like the formal definition of limits in terms of metric balls. That is:

The function f:\mathbb{R}^n\rightarrow\mathbb{R} has limit L at the point a if for every \epsilon>0 there is a \delta>0 so that \delta>\lVert x-a\rVert>0 implies \lvert f(x)-L\rvert<\epsilon.

We just consider the case with target \mathbb{R} since higher-dimensional targets are just like multiple copies of this same definition, just as we saw for continuity.

Now, let’s look at a few examples of limits to get an idea of why it’s not so simple. In each case, we will be considering a function f:\mathbb{R}^2\rightarrow\mathbb{R} which is bounded near \left(0,0\right) (since just blowing up to infinity would be too easy to be really pathological) and even has nice limits along certain specified approaches, but which still fails to have a limit at the origin.

First off, let’s consider \displaystyle f(x,y)=\frac{x^2-y^2}{x^2+y^2}. If we consider approaching along the x-axis with the sequence a_n=\left(\frac{1}{n},0\right) or a_n=\left(-\frac{1}{n},0\right) we find a limit of {1}. However, if we approach along the y-axis with the sequence a_n=\left(0,\frac{1}{n}\right) or a_n=\left(0,-\frac{1}{n}\right) we instead find a limit of -1. Thus no limit exists for the function.
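A quick numeric check (just evaluating the function along the two axes, in Python):

```python
def f(x, y):
    return (x**2 - y**2) / (x**2 + y**2)

# Along the x-axis the value is identically 1; along the y-axis, identically -1.
assert f(1e-9, 0.0) == 1.0
assert f(0.0, 1e-9) == -1.0
```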

Next let’s try \displaystyle f(x,y)=\frac{x^4-6x^2y^2+y^4}{x^4+2x^2y^2+y^4}. Now the approaches along either axis above all give the limit {1}, so the limit of the function is {1}, right? Wrong! This time if we approach along the diagonal y=x with the sequence a_n=\left(\frac{1}{n},\frac{1}{n}\right) we get the limit -1. So we have to consider directions other than the coordinate axes.
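Again checking numerically (Python):

```python
def f(x, y):
    return (x**4 - 6*x**2*y**2 + y**4) / (x**4 + 2*x**2*y**2 + y**4)

assert f(1e-3, 0.0) == 1.0                 # along the x-axis
assert f(0.0, 1e-3) == 1.0                 # along the y-axis
assert abs(f(1e-3, 1e-3) + 1.0) < 1e-12    # along the diagonal y = x: -1
```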

What about \displaystyle f(x,y)=\frac{x^2y}{x^4+y^2}? Approaching along the coordinate axes we get a limit of {0}. Approaching along any diagonal y=mx with the sequence a_n=\left(\frac{1}{n},\frac{m}{n}\right) the calculations are a bit hairier but we still find a limit of {0}. So approaching from any direction we get the same limit, making the limit of the function {0}, right? Wrong again! Now if we approach along the parabola y=x^2 with the sequence a_n=\left(\frac{1}{n},\frac{1}{n^2}\right) we find a limit of \frac{1}{2}, and so the limit still doesn’t exist. By this point it should be clear that if straight lines aren’t enough to simplify things then there are just far too many curves to consider, and we need some other method to establish a limit, which is where the metric ball definition comes in.
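And numerically (Python): along the line y=2x the values still shrink to zero, while along the parabola y=x^2 they are pinned at \frac{1}{2}:

```python
def f(x, y):
    return x**2 * y / (x**4 + y**2)

t = 1e-3
assert abs(f(t, 0.0)) < 1e-9            # along the x-axis: identically 0
assert abs(f(t, 2 * t)) < 1e-3          # along y = 2x: roughly t/2, still shrinking
assert abs(f(t, t**2) - 0.5) < 1e-9     # along y = x^2: stuck at 1/2
```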

Now I want to go off on a little bit of a rant here. It’s become fashionable not to teach the metric ball definition — \epsilon\delta proofs, as they’re often called — at the first-semester calculus level. It’s not even on the Calculus AB exam. I’m not sure when this happened, because I was taught them first thing when I took calculus, and it wasn’t that long between then and my first experience teaching calculus. But it’d have to have been sometime in the mid-’90s. Anyway, they don’t even teach it in most college courses anymore. And for the purposes of calculus that’s okay, since as I mentioned above you can easily get away without them when dealing with single-variable functions. Students can even survive the analogues of \epsilon\delta proofs that come up when dealing with convergent sequences in second-semester calculus.

The problem comes when students get to third semester calculus and multivariable functions. Now, as we’ve just seen, there’s no sure way of establishing a limit. We can in some cases establish the continuity of simple functions (like coordinate projections) and then use limit laws to build up a larger class. But this approach fails for functions superficially similar to the pathological functions listed above, but which do have limits which can be established by an \epsilon\delta proof. We can establish that certain limits do not exist by techniques similar to those above, but this requires some ingenuity in choosing two appropriate paths which give different results. There are one or two other methods that work in special cases, but nothing works like an \epsilon\delta proof.

But now we can’t teach \epsilon\delta proofs to these students! The method is rather more complicated when we’ve got more than one variable to work with, not least because of the more complicated distance formula to work with. What used to happen was that students would have developed some facility with \epsilon\delta proofs back in first and second semester calculus, which could then be brought to bear on this new situation. But now they have no background and cannot, in general, absorb both the logical details of challenge-response \epsilon\delta proofs and the complications of multiple variables at the same time. And so we show them a few jury-rigged tricks and assure them that within the rest of the course they won’t have to worry about it. I’d almost rather dispense with limits entirely than present this Frankenstein’s monstrosity.

And yet, I see no sign that the tide will ever turn back. The only hope is that the movement to make statistics the capstone high-school course will gain momentum. If we can finally wrest first-semester calculus from the hands of the public school system and put all calculus students at a given college through the same three-semester track, then the more intellectually rigorous institutions might have the integrity to put proper limits back into the hands of their first semester students and not have to worry about incoming freshmen with high AP scores covering for shoddy backgrounds.

September 17, 2009 Posted by | Point-Set Topology, rants, Topology | 29 Comments

It’s that time again

I’ve made my opinion clear about today. It’s completely based on two accidents. One is the use of decimal notation, and one is the use of the Gregorian calendar. And it reduces mathematics to a caricature, with no real understanding even of its referent.

And now even Rachel is getting in on it, which I figured she probably would. Okay, so Rachel, I’ve got a deal for you: if non-public-policy geekdom is actually of interest to you beyond “One More Thing” fodder, I’d be glad to come on the show next year and explain why these celebrations are actually detrimental. I’ll be waiting for your email.

March 14, 2009 Posted by | rants | 9 Comments

Pi: A Wrap-Up

A couple months ago, in a post on World Series odds (how are those working out, Michael?), a commenter by the moniker of Kurt Osis asked a random question:

Ok now to my random question for the day. Is all human knowledge based on Pi? This just occurred to me the other day, if knowledge is based on measurement and the only objective form of measurement we have is the ratio between a circle’s circumference and diameter then is all knowledge really based on Pi?

Naturally, this sounds like just the sort of woo that I’ve decried in The Tao of Physics and The Dancing Wu Li Masters. It also smacks of “mathing up” the fuzzy ideas to give them the veneer of rigor and respectability. I’ve seen politicians do it, we’ve all seen poststructuralists do it, and there’s a lot of others that do too. And one of the very few undeniably mathy words that almost everyone knows is that blasted Greek constant \pi, so it gets called into service a lot.

Clearly, I had to nip this in the bud.

I pointed out that this idea of wrapping things up with “measurement” really gave away that this was nonsense. I cited that curvature of spacetime throws off exactly such measurements (a point I recently brought up with Todd, but he hadn’t thrown “measurement” out there himself). At that point Kurt backtracked and said that \pi was an idealization, and the measured discrepancies were knowledge. Of course I had forgotten about how slippery arguments can be with someone who only cares for the veneer of rigor.

Still I pressed onwards. I pointed out that \pi has nothing to do with any real, physical measurement. The Cabibbo angle, or the fine-structure constant — those are the real-world constants that are actually interesting because there is (as yet) no reason why they have to have the values that they do.

Then the discussion moved from an unrelated post on Michael’s weblog to an unrelated post on mine.

Again, Kurt advances the “epistemic \pi” hypothesis as if it’s remotely coherent. Now he asserts that he was “trying to think of something independent of the number system itself”, and finally I had something. Here I made my stand:

\pi is far from independent of the number system. It is what it is exactly because of the way the real number system is structured.

Then and there I decided to stop what I was working on about linear algebra. Instead, I set off on power series and how power series expansions can be used to express analytic functions. Then I showed how power series can be used to solve certain differential equations, which led us to defining the functions sine and cosine. Then I showed that the sine function must have a least positive zero, which we define to be \pi.
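That whole chain can even be mimicked in a few lines. Here's a Python sketch — the truncation at 30 terms and the bisection bracket are my own choices — defining sine by its power series and bisecting for its least positive zero:

```python
import math

def sine(x, terms=30):
    # The power series: sin x = sum over k >= 0 of (-1)^k x^(2k+1) / (2k+1)!
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(terms))

# Bisect for the least positive zero: the series is positive at 3, negative at 3.5.
lo, hi = 3.0, 3.5
for _ in range(60):
    mid = (lo + hi) / 2
    if sine(mid) > 0:
        lo = mid
    else:
        hi = mid

print(lo)   # ≈ 3.141592653589793
```

No measurement anywhere in sight: just the real number system and its analysis.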

The assumptions that have led to the definition of \pi are just those of the real number system: we are working within the unique (up to isomorphism) largest archimedean field. There is no measurement, no knowledge, no science, and no epistemology to it. Kurt’s real question — the one he hops onto other mathematical weblogs’ unrelated comment threads to ask — is really about philosophy. He’s asking for a final answer to the entire field of epistemic research. It’s not forthcoming; not on a math weblog, not on a philosophy weblog, not anywhere. It’s been around in its current form for hundreds of years, and I don’t see a resolution on the horizon. But it certainly doesn’t lie in an accidental quirk of the real number system that society has for some reason decided to exalt far beyond its true value.

October 16, 2008 Posted by | rants | 3 Comments

What’s Really Important

Last Monday I noticed an XKCD comic and then later deconstructed it. The upshot is that I didn’t like it, but many XKCD fans turned around to tell me that I was either stupid or crazy to question Randall’s artistic vision.

This Monday’s was up about ten minutes before Randall’s inbox flooded. And now we know what topics are important enough to voice disagreement over.

February 25, 2008 Posted by | rants | 12 Comments

Deconstructing XKCD

Okay, evidently I need to flex my Critical Theory muscles, atrophied from years of disuse, and bring them to bear on yesterday’s offhand remarks.

To recall, we’re looking at the XKCD comic from Monday, February 18. The title is given as “How It Works”. This is where the ambiguity begins. The phrase “how it works” can indicate either an observational or a normative description. Observationally, we might catalogue the operations of a certain system. Here, those operations are the ways in which the system works — “how it works” as an entity isolated from the reader. Normatively, we might take our knowledge of a system and give instruction on proper interactions with the system. Such instructions tell how to achieve such results as the author finds worthy — “how it works” to the benefit of the reader, as seen by the author.

The ambiguity is important here because of the different connotations of the two readings. The observational reading is emotionally neutral with respect to its content. The system simply is, and the author renders an image of the system as it is, with no inherent judgement or commentary. On the other hand, the normative statement is inherently an endorsement. The author instructs the reader to interact thusly.

It should be noted here that with slight modifications, the observational mode can be turned to a critical mode. For example, instead of merely describing the human condition as he saw it, Nietzsche entitled his book, Menschliches, Allzumenschliches. In doing so, he explained that he was describing what it was to be “human-like”, but emphasized his disapproval by picking it out as “all too human-like” — something to be escaped rather than merely documented, let alone embraced.

Now, as to the content itself. The comic compares two nearly-identical situations. In each case, two people stand at a chalkboard. In each, the person on the right has just finished writing out the formula

\displaystyle\int x^2=\pi

on the board. In each, the person on the left comments in response. I will refer to the person on the right as the “Writer”, and the person on the left as the “Speaker”.

First, let’s dismiss the details of the mathematical fact. Others have pointed out various flaws. The alt text for the image correctly lists one such possibility — the Writers have omitted constants of integration. Another problem is that each image omits the “dx” from the integral. However, these details are actually immaterial to the setup. The expression is not meant as mathematics itself, but as an icon representing mathematics. That is, it acts as a symbol meaning “mathematical work containing a glaring error”. However, the presence of an integral sign picks out the level of the signified work: basic calculus.

In each situation, the Speaker is drawn identically, and generically. The identity is clearly meant to suggest that the two speakers are actually the same person, reacting to two slightly different situations. The difference is all in the Writer.

The author’s style is for “stick figures” with a minimum of recognizable features. However, there are a few standards to his iconography. Most important here is that almost all of the characters are bald, with the exception of female characters, who look identical to the male characters except inasmuch as they have hair.

The difference in the situations is clearly that in the first, the Writer is male, while in the second the Writer is female. And thus the Speaker’s different reaction is solely a result of the difference in the Writer’s sex. In the first situation, the Speaker says, “Wow, you suck at math.” In the second, he says, “Wow, girls suck at math.”

So we have a significant error in calculus-level mathematics. Nothing about the Writer suggests to the Speaker that this is a one-time error by a normally-competent person. The reaction is not “that’s a mistake”, but “you suck” in both situations, indicating the glaring nature of the error. The Writer, it may be assumed, actually is bad at mathematics.

But then why is someone bad at math at the board anyhow? People with little mathematical skill don’t seek out public fora like chalkboards without provocation. If they must do calculus, it will be hidden on paper, so the numerous false starts and errors can be easily swept under the rug. This identifies the Writer as a student, and the Speaker as someone with enough sway over the Writer to force an appearance at the chalkboard — likely an instructor. Since the Writer is a student with mediocre mathematical ability, it is unlikely that the instruction is taking place in a high school setting. Far more likely, this is at a college, where calculus is often a general requirement.

But despite cultural assumptions, college calculus instructors generally don’t hold their individual students in contempt. We complain about students en masse, but each individual student is to be helped to understand the material. Yes, some instructors don’t fit this mold, but if we are to adopt the observational mode with respect to this comic, we must understand it as speaking generically. There are no identifying features about the Writer or the Speaker (other than the Writer’s sex), and so we cannot understand either of them to be established characters. They are generic placeholders, filling roles to be determined (as we have above) from the context.

And so each situation — with a male Writer or a female — rings false when interpreted generically, as an observation. And yet our prior knowledge of the author tells us that he can’t be meaning this normatively. We are left unsatisfied, with an awkward, ill-contextualized comic. However, if we did not have prior knowledge of the author (as many readers may not) then the awkward contextualization provides reason to read the comic normatively. Either way, the work surely fails to achieve its goals.

February 20, 2008 Posted by | rants | 36 Comments

XKCD… WTF?

Okay, usually I’m all behind XKCD, but today’s installment is a bit of a head-scratcher.

The title seems especially ill-chosen. I mean, I know that Randall’s not a doctor of linguistics, but he’s usually pretty on the ball. Clearly he can’t mean the title as a normative statement, but he also has to understand that “how it works” will commonly parse as “how it should work”. The fact that there’s no comeuppance for the jerk doesn’t help here. Without further comment, it’s easy to read the comic as an endorsement of this attitude.

The other thing that leaves a bad taste in my mouth is that the guy on the left is not a clearly-defined character we all know to be unpleasant already. Yes, I know this is arguing semiotics, but there’s a reason Goofus and Gallant comics are so easily read: a generic character will be interpreted as a generic person. Their behavior is then also taken as generic. Putting the Hat Guy in there would go a long way towards making this not seem like an endorsement.

And then the details are off. The characters are looking at a calculus problem. I don’t know anyone — at least any instructor — in this day and age who thinks like this at the calculus level. As far as I know, the psychological damage is usually done by this point. The attitude comes in during grade-school, so an arithmetic problem (and younger characters at the board) would be more appropriate. That is, unless Randall is asserting that this attitude is endemic (remember generic character => generic person) among calculus instructors.

In that case I really have to disagree with him in the strongest possible terms. But again, there’s no further comment, and the whole thing just feels disappointing as a result.

February 18, 2008 Posted by | rants | 42 Comments

Chalk is a “Feelie”

Okay, so I’ll pile on with the interactive fiction chatter. I really should, since I’ve been playing IF games since I was a wee lad.

First someone pointed out that a long calculation is like a computer game where you have to save and keep backtracking to your saved states. Isabel at God Plays Dice then drew the more specific connection to interactive fiction. Then Mark at Inductio Ex Machina contributed this sample transcript of such a “game”.

I’d like to point out an unintended analogy here. It’s pretty well accepted within the IF community that any puzzle should be solvable at a first pass. That is, if you’ve done everything right you’ll have all you need to solve a given puzzle without guessing, failing, and backtracking. In fact, it’s the height of bad writing to include a puzzle that requires you to attempt it and fail to gain information needed to pass.

I think that the same holds true in mathematics. If you find that you must do a hard calculation with attendant backtracking, you’re asking the wrong question. When properly viewed, the solution to any problem should be inherent in the problem itself. Of course, it might be more convenient in context to bash your head against a wall than to look for the hidden doorway, but it’s really not the best way to go about things in the long run. I come back to my favorite passage from Grothendieck’s Récoltes et Semailles.

Prenons par exemple la tâche de démontrer un théorème qui reste hypothétique (à quoi, pour certains, semblerait se réduire le travail mathématique). Je vois deux approches extrêmes pour s’y prendre. L’une est celle du marteau et du burin, quand le problème posé est vu comme une grosse noix, dure et lisse, dont il s’agit d’atteindre l’intérieur, la chair nourricière protégée par la coque. Le principe est simple: on pose le tranchant du burin contre la coque, et on tape fort. Au besoin, on recommence en plusieurs endroits différents, jusqu’à ce que la coque se casse — et on est content. Cette approche est surtout tentante quand la coque présente des aspérités ou protubérances, par où “la prendre”. Dans certains cas, de tels “bouts” par où prendre la noix sautent aux yeux, dans d’autres cas, il faut la retourner attentivement dans tous les sens, la prospecter avec soin, avant de trouver un point d’attaque. Le cas le plus difficile est celui où la coque est d’une rotondité et d’une dureté parfaite et uniforme. On a beau taper fort, le tranchant du burin patine et égratigne à peine la surface — on finit par se lasser à la tâche. Parfois quand même on finit par y arriver, à force de muscle et d’endurance.

Je pourrais illustrer la deuxième approche, en gardant l’image de la noix qu’il s’agit d’ouvrir. La première parabole qui m’est venue à l’esprit tantôt, c’est qu’on plonge la noix dans un liquide émollient, de l’eau simplement pourquoi pas, de temps en temps on frotte pour qu’elle pénètre mieux, pour le reste on laisse faire le temps. La coque s’assouplit au fil des semaines et des mois — quand le temps est mûr, une pression de la main suffit, la coque s’ouvre comme celle d’un avocat mûr à point ! Ou encore, on laisse mûrir la noix sous le soleil et sous la pluie et peut-être aussi sous les gelées de l’hiver. Quand le temps est mûr c’est une pousse délicate sortie de la substantifique chair qui aura percé la coque, comme en se jouant — ou pour mieux dire, la coque se sera ouverte d’elle-même, pour lui laisser passage.

L’image qui m’était venue il y a quelques semaines était différente encore, la chose inconnue qu’il s’agit de connaître m’apparaissait comme quelque étendue de terre ou de marnes compactes, réticente à se laisser pénétrer. On peut s’y mettre avec des pioches ou des barres à mine ou même des marteaux-piqueurs: c’est la première approche, celle du “burin” (avec ou sans marteau). L’autre est celle de la mer. La mer s’avance insensiblement et sans bruit, rien ne semble se casser rien ne bouge l’eau est si loin on l’entend à peine… Pourtant elle finit par entourer la substance rétive, celle-ci peu à peu devient une presqu’île, puis une île, puis un îlot, qui finit par être submergé à son tour, comme s’il s’était finalement dissous dans l’océan s’étendant à perte de vue…

Le lecteur qui serait tant soit peu familier avec certains de mes travaux n’aura aucune difficulté à reconnaître lequel de ces deux modes d’approche est “le mien” — et j’ai eu occasion déjà dans la première partie de Récoltes et Semailles de m’expliquer à ce sujet, dans un contexte quelque peu différent. C’est “l’approche de la mer”, par submersion, absorption, dissolution — celle où, quand on n’est très attentif, rien ne semble se passer à aucun moment: chaque chose à chaque moment est si évidente, et surtout, si naturelle, qu’on se ferait presque scrupule souvent de la noter noir sur blanc, de peur d’avoir l’air de combiner, au lieu de taper sur un burin comme tout le monde… C’est pourtant là l’approche que je pratique d’instinct depuis mon jeune âge, sans avoir vraiment eu à l’apprendre jamais.

In case you haven’t yet passed your French language qualifier, I’ll give a rough translation.

Take, for example, the task of proving a theorem that remains hypothetical (to which, for some, mathematical work would seem to reduce). I see two extreme approaches one could take. The first is that of hammer and chisel, wherein the problem posed is seen as a large nut, hard and smooth, which contains a nourishing meat protected by the shell. The principle is simple: one puts the edge of the chisel against the shell and hits it hard. If necessary, one tries again in many different places, until the shell cracks — and one is happy. This approach is especially appealing when the shell shows a rough or bumpy patch where it can be grasped. In some cases, such places to grab the nut leap out at you. In other cases, one must turn the nut over carefully in every direction, surveying it with care, before finding a point of attack. The most difficult case is that where the shell is perfectly round and uniformly hard. When hit strongly, the edge of the chisel just skids and barely scratches the surface — one ends up merely tired. Sometimes the nut will finally crack through mere strength and stamina.

I can illustrate the second approach with the same metaphor of a nut to be opened. The first parable that came to mind is to immerse the nut in some softening liquid — water, for instance — and to rub it from time to time to allow the water to penetrate better, but otherwise to leave it alone. Over weeks and months, the shell softens — when the time is right, a squeeze of the hand is sufficient, and the shell opens like that of a perfectly ripe avocado! Or again, one can leave the nut out in the sun and the rain and even through the icy winter. When the time is right, it is a delicate shoot, sprung from the nourishing meat within, that will have pierced the shell, as if in play — or, to say it better, the shell will have opened of itself, to let it through.

The picture that came to me recently was again different. The unknown thing one is trying to understand seems to me like a stretch of land or a hard patch of earth, reluctant to let itself be dug into. One might go at it with picks or crowbars, or even with jackhammers: this is the first approach, that of the “chisel” (with or without hammer). The other is that of the sea. The sea advances imperceptibly and noiselessly. Nothing seems to break, nothing moves, the water is so far off you can hardly hear it… Yet eventually it surrounds the resistant land, which slowly becomes a peninsula, then an island, then an islet, and finally is submerged in its turn, as if it had dissolved into the ocean stretching as far as the eye can see…

The reader who is at all familiar with some of my work will have no difficulty determining which of these two approaches is “mine” — and I have had occasion already, in the first part of these “Reapings and Sowings”, to explain myself on this subject in a slightly different context. It is “the approach of the sea”, by submersion, absorption, dissolution — the one where, if one does not pay close attention, nothing seems to happen at any given moment: everything at each moment is so evident, and above all so natural, that one would almost feel scruples about writing it down in black and white, for fear of seeming to be scheming, rather than banging away at a chisel like everyone else… Yet this is the approach I have practiced instinctively since I was young, without ever really having had to learn it.

As for the title of this post, a feelie is a physical object — often some document — that was packaged with a game and contained information crucial to some puzzle you’d need to solve. That is, if you didn’t buy the game and get the feelie, you couldn’t get past a certain point. It provided a crude level of copy-protection back in the good old days, under the pretense of extending the game experience (more common in non-IF games was asking the user to type in some specified word from the documentation). Thus, a feelie was all too often a hack — a puzzle relying on one was awkward and inelegant, pulling you out of the experience of the game rather than immersing you in it as was hoped.

Blackboards full of equations serve the same obscuring purpose. True understanding never lies in a calculation. The chalk on the board should not be a map, but a lens, and the mathematics is not in the equations, but behind them.

October 19, 2007 Posted by | rants | 5 Comments

Carnival?

There’s been considerable discussion, particularly in this thread on Michi’s blog about the Carnival of Mathematics.

If you’ve been here from the beginning, you know that I contributed to the CoM from its start. It’s a great idea, but the execution… well, as time went by it had more and more to do with brainteasers and education, and less and less to do with the meat of mathematics itself.

It might have had something to do with handing it to a sequence of weblogs that are only tangentially mathematical in their mission, and particularly a streak of explicitly math-ed weblogs. It might just be that the vast majority of people reading and writing weblogs who think of themselves as knowing some math are really computer programmers, physicists, and engineers who use mathematics as a tool and only ever really see pure, unadulterated mathematics in the form of puzzles or tricks; or pre-college mathematics teachers who, by and large, think about mathematics only insofar as it helps their students learn the material, never for its own sake.

And so I eventually stopped when for three postings in a row I stuck out like a sore thumb as the only contribution above the level of a sudoku.

Don’t get me wrong. All these lower-level non-technical posts are good, but I started to feel like the 50-year-old guy at a rave. By that point, nobody was coming to the Carnival to read about categorification. And this host couldn’t even spell the g—–n word despite my using it in my submission email, over and over in the linked post, and in the freaking title of the post. It was clear that I was the odd man out here, and that my submissions were only begrudgingly accepted with little care from the hosts.

I think that was the beginning of the end for me. The next fortnight I was in Faro, which gave me a good out-of-line post on Khovanov Homology, but since then I haven’t felt at all interested in writing anything outside my main expository line for Carnival submission. That next Friday came and went and I saw no difference in my hits. Just as I’d thought, nobody was coming from Carnival who wouldn’t have come anyway.

So here’s how I see it. The Carnival of Mathematics has become a de facto carnival of lower-level mathematics, brainteasers, and mathematics education. And I’m fine with that. I’m leaning towards letting it be and just starting a new carnival for actual mathematics. There are certainly many more mathematics weblogs than there were when CoM began, and they could support at least a monthly carnival on their own now. Or maybe this more academic community is inclined to disdain the carnival approach entirely.

Other people have suggested that there’s something to be gained by mixing the levels, and while I agree that something could be gained, I don’t think anything is being gained. People coming from the lower-level and dilettantish weblogs are not reading the higher-level material. And higher-level people can still read the Carnival posts and find what’s new in sudoku-land if they want, whether high-level blatherers submit to CoM or not.

But let’s be sort of scientific about this. A show of hands: who found The UM through a carnival post linked from a lower-level sometimes-mathematical weblog? Who found it through a comment I’d made on another weblog, or through a direct reference on another weblog? Who still finds upper-level weblogs through the Carnival? And what, specifically, do you think will be lost if weblogs like The UM, God Plays Dice, and the Seminars recognize the CoM‘s current state for what it is rather than what it could have been and move on with our lives and weblogs?

I’d like it if you leave a visible comment here, but if you’d prefer to email your correspondence to me privately you know I’m teaching at Tulane now…

August 18, 2007 Posted by | rants | 29 Comments
