Saturday, January 8, 2011

Making Things Harder, Digitally

For the last week, I've been subbing a high school physics class and teaching about projectile motion (that's determining how far something will fly after you launch it into the air--think Monty Python and the Holy Grail, when the French catapult a dead cow out of the castle at the knights). There's a lot of math in this (as there is in all physics), and I found I had a fair number of students who were having trouble not with the physics but with their calculators.

When I took physics, way back in the Dark Ages (that is, when determining how far you could catapult a dead cow was still relevant), we used pencil, paper and this amazing device called a slide rule, which could multiply, divide, take square roots, do sines, cosines, tangents, and so forth, all without pressing buttons or discovering the batteries had died on the second question of the final exam. But the slide rule (and the first generation of electronic calculators) could only do one thing at a time, so you had to do the operations in order and sometimes write down partial results that you'd use later.

Now we have hand-held calculators with big screens and more computing power than was in the high-end scientific computer I used in college. So you can enter a whole long expression at once, rather than one operation at a time, and just hit the "=" button to get your result.

This should be easier, but it turns out not to be. The reason is that the calculator, like the computer I used in college, has a keyboard. And a keyboard is, by nature, a one-dimensional thing: one letter (or digit, or symbol) follows another. Mathematical notation, on the blackboard or on a sheet of paper, is two-dimensional. Look at this example, finding the time it takes an object to fall 14.4 meters:

t = √( 2(−14.4) / −9.8 ), with the fraction stacked under the radical bar

You need but one set of parentheses to note that the 14.4 is negative (heading downward). The fact that the numerator has multiple terms is implied by its being above the line, and the extended line atop the square root symbol conveys that you first do all the multiplication and division, and then take the root. And the value of the second dimension only becomes more apparent when you start using superscripts (for squares, cubes, etc.) and subscripts... and when you start nesting more complex expressions within each other.
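
In case it helps to see where that expression comes from: assuming the object is simply dropped (no initial downward shove), the constant-acceleration relation d = ½at², solved for t with d = −14.4 m and a = −9.8 m/s², gives

\[
d = \tfrac{1}{2} a t^{2}
\quad\Longrightarrow\quad
t = \sqrt{\frac{2d}{a}} = \sqrt{\frac{2(-14.4)}{-9.8}}
\]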

If you do this computation with a slide rule or primitive calculator, you start by noticing that the minus signs in numerator and denominator cancel out, which means you can forget them. Next you multiply 14.4 times 2, then divide by 9.8, and finally take the square root.
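
Written out with the actual numbers, that sequence is

\[
\frac{2(-14.4)}{-9.8} = \frac{2 \times 14.4}{9.8} = \frac{28.8}{9.8} \approx 2.94,
\qquad
t = \sqrt{2.94} \approx 1.71\ \text{s}
\]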

But if you're going to do this the "easy" way, letting the calculator do all the work, you've got to translate this two-dimensional expression into a line of characters on the display. You might do what several of the students did, and enter it like this:

√2 × (−14.4) ÷ (−9.8)

Notice you've already had to add one more set of parentheses, to tell the calculator that the denominator is negative and you're not subtracting. But even so, this won't give you the right answer, as the calculator sees the square root symbol and assumes you just want to take the square root of two and multiply it by the stuff that follows (because square root is a higher priority operation than multiplying, and anyway, it's at the beginning of the line). So you have to add some more parentheses to make sure you get the right answer:

√( 2 × (−14.4) ÷ (−9.8) )

So now, an expression that required only one set of parentheses in two-dimensional math notation requires three sets to properly convey your intent to the calculator. And, if you left out the extra parentheses the first time, you've just distracted yourself from learning physics (which is the whole intent of taking the course; that's why it's called "Physics") and are now for all practical purposes debugging a FORTRAN program. Welcome to 1973.
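
If you would rather see the same precedence trap in a language that actually runs, here is a small Python sketch (Python is just standing in for the calculator's one-line input; the variable names are mine):

import math

# Typed naively, the square root grabs only the 2; the rest is
# multiplied and divided afterward.
naive = math.sqrt(2) * (-14.4) / (-9.8)     # about 2.08

# With the extra set of parentheses, the whole quotient goes under the root.
correct = math.sqrt(2 * (-14.4) / (-9.8))   # about 1.71 seconds

print(f"root of 2, times the rest:     {naive:.3f}")
print(f"root of the whole quotient:    {correct:.3f}")

The first computation is exactly the mistake the calculator invites: nothing in a one-dimensional string of characters tells the square root how far to the right it is supposed to reach.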

In short, the calculator, billed as a device to make your work easier, has in fact made it harder--because each time you enter one of these expressions (which, I repeat, was the thing you were actually trying to learn in your physics class) you must translate it into a form the calculator understands. You must now be both a physicist and a programmer. And an unpaid one, at that.

We have touch screens and tablet interfaces. We have lots and lots of computing power available. We have the ability to play Tetris on the calculator if the lecture seems boring. Why don't we have the ability to just scribble the expression, in traditional mathematical notation, on the calculator's screen? Why must we still translate it to a fifties-vintage programming language?
