Computation

While it is likely true that mathematics as a whole is a language, I think it's fair to say that most of that language exists in order to solve problems. Historically, many of the techniques we now think of as purely mathematical had their origin in what we would call applied problems. These days it is hard to imagine any field of applied mathematics without some element of computation. Computers help us predict the weather, optimize industrial processes, forecast economic trends and process the mountains of data our many measuring devices create. Still, while we are taught the definition of a derivative and the rudiments of programming languages, the essence of numerical computation is not often spelled out in our courses.

It is an almost universal observation that mathematics is hard: equations are tough, if not impossible, to solve; relationships are often clouded by complex and counter-intuitive definitions; and so on. As frustrating as this is, it hardly seems a surprise. Our mathematical toolbox, after all, is rather small. Think of functions of one variable as an example. You have polynomials, rational functions, the trigonometric functions and the exponentials, and that's about it. With appropriate restrictions you also have the various inverses of these functions. While this is not nothing, it hardly seems to match the incredible array of physical phenomena we could hope to describe. But wait: the first year of calculus offers an enticing way out via Taylor's theorem, which tells us that, locally at least, we can approximate ANY reasonably well-behaved function by a polynomial. Indeed, for a given approximation Taylor's theorem also gives us bounds on the error. Thus perhaps we do not need to come up with exotic functions to describe measured data; we could build a description out of polynomials instead. Notice that this is sort of strange: we assume there is a function describing the phenomenon in question, but we do not try to find it. Instead we find an approximation to this function. Of course, we do this out of necessity.
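To see Taylor's theorem at work, here is a minimal Python sketch (the choice of the exponential function and of the expansion point 0 is mine, purely for illustration) comparing a function to its Taylor polynomials of increasing order:

import math

def taylor_exp(x, n):
    # Degree-n Taylor polynomial of exp about 0: 1 + x + x^2/2! + ... + x^n/n!
    return sum(x**k / math.factorial(k) for k in range(n + 1))

x = 0.5
for n in (1, 2, 4, 8):
    approx = taylor_exp(x, n)
    print(f"n={n}: approx={approx:.10f}  error={abs(math.exp(x) - approx):.2e}")

The errors shrink rapidly with the order, just as the error bounds in Taylor's theorem promise.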

Even if we accept the principle of this argument, the utility of the discovery made in the last paragraph depends crucially on how efficiently we can find the approximation in question. It is the ready availability of powerful computers that proves to be the key. Let's consider a simple ordinary differential equation y'(t)=F(y,t). You've probably learned to solve various special cases of this equation by finding the formula for y(t). For example, if F(y,t)=-y, you get a decaying exponential as a solution. If you allow higher order derivatives you can get such staples of classical physics as Newton's Second Law and the Simple Harmonic Oscillator equation. These form the backbone of the deterministic description of matter.

But there is another way to proceed: instead of taking the limit in the definition of the derivative, we could approximate it by

y'(t) ~ (y(t+h) - y(t))/h,

where h is a small number (say 0.00000001, for concreteness). Now if we set this difference quotient equal to F(y(t),t), then we can write a recipe for finding y(t+h) given that we know y(t), namely

y(t+h) = y(t) + hF(y(t),t)

This is called Euler's method for solving differential equations numerically. It isn't much of an advance for single differential equations, but for systems of several, or even many, differential equations it is a viable option. The key to its utility is that its accuracy can be improved in two ways:
1) By making h smaller
2) By being more careful about how we do the approximation, which essentially amounts to choosing the order of the Taylor polynomial we use.
Instead of a formula for the solution, Euler's method gives us a vector of values of y(.) at various times. In the case of many differential equations, a rather large amount of data is produced. What to do with that data will be discussed below.
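To make the recipe concrete, here is a minimal Python sketch of Euler's method applied to the example F(y,t) = -y from above, whose exact solution is the decaying exponential (the step size h and final time T are arbitrary illustrative choices):

import math

def euler(F, y0, h, T):
    # March from t = 0 to t = T in steps of size h,
    # using the update y(t+h) = y(t) + h*F(y, t).
    t, y = 0.0, y0
    ts, ys = [t], [y]
    for _ in range(int(round(T / h))):
        y = y + h * F(y, t)
        t = t + h
        ts.append(t)
        ys.append(y)
    return ts, ys

ts, ys = euler(lambda y, t: -y, y0=1.0, h=0.01, T=2.0)
print(f"Euler: y(2) ~ {ys[-1]:.6f}   exact: {math.exp(-2.0):.6f}")

Halving h, or switching to a higher-order update, brings the computed value closer to the exact one.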

First, however, let's try a slight modification of the simple equation above, one that is thought to model the real world on small scales a bit better. It has been known since the 1820s that small particles suspended in a fluid experience an irregular motion (Brownian motion). It took another 90 years or so before Einstein, Langevin and others linked the observations to the agitation caused by the myriad molecular collisions occurring in any instant of time. To model a one-dimensional version of this phenomenon, we split up F(y(t),t) so that it now has one part that depends on y(.) and t in the usual, deterministic way, and a second part that represents the random kicks due to the unceasing motion of all molecules. Here is a particular example: y'(t) = -y(t) + E(t), where E is used to represent the random kicks from the millions upon millions of molecular collisions that take place every instant. As it stands the equation is actually nonsense. This is because it is unclear whether the function that would be a "solution" would actually have derivatives in the proper mathematical sense. However, we could try a slightly different approach. Consider the Euler's method version of the above equation:

y(t+h) = y(t) + h(-y(t)) + g(h)E(t)

The only twist is that I used g(h) in front of the random bit to give us a bit of wiggle room. Notice that the above is in recipe form again: if we know everything at time t, and if we can generate an appropriate random number (this can get subtle, but we assume it can be done), then we get the function value at t+h.

Now, how to choose g(h)? You can probably guess that g(h) = h won't work. Indeed, there is a rather deep mathematical theory which explains why we should choose the square root of h instead; for now, let's argue as follows. The ordinary, deterministic part of the equation is multiplied by h, a very small increment of time. The random kicks, however, are assumed to happen very fast and very often, so even in a time interval of length h we will get many of them. It is generally assumed that a kick is just as likely to be negative as positive, so the kicks partially cancel one another, and their net effect is far smaller than their sheer number would suggest. If the kicks are drawn from a Gaussian, or Normal, distribution, the choice that balances this cancellation against the shrinking time step turns out to be g(h) = sqrt(h).
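One way to make this heuristic a little more precise (a sketch of the scaling argument, not the full theory alluded to above) is to add up the noise over a fixed interval of length T, taken in n = T/h steps with independent standard normal kicks E_1, ..., E_n:

Var( g(h)E_1 + ... + g(h)E_n ) = n g(h)^2 = (T/h) g(h)^2

This quantity stays finite and nonzero as h shrinks to zero precisely when g(h)^2 is proportional to h, that is, when g(h) = sqrt(h).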

Thus we have a recipe for the evolution of this new, non-deterministic system. It is worth noting that even if we start at the same place at t=0, say y(0)=1, every time we apply our recipe we get a different path. However, this does not mean that the resulting paths are completely irregular; see the following figure showing three paths:

[Figure: three sample paths]
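Here is a minimal Python sketch of how such paths can be generated with the recipe above (the seeds, the step size and the noise-strength parameter sigma are my own illustrative additions):

import math
import random

def noisy_path(y0=1.0, h=0.01, T=2.0, sigma=1.0, seed=None):
    # Update: y(t+h) = y(t) + h*(-y(t)) + sigma*sqrt(h)*E, with E ~ N(0, 1).
    rng = random.Random(seed)
    y, ys = y0, [y0]
    for _ in range(int(round(T / h))):
        y = y + h * (-y) + sigma * math.sqrt(h) * rng.gauss(0.0, 1.0)
        ys.append(y)
    return ys

for seed in (1, 2, 3):  # same start y(0) = 1, three different streams of kicks
    print(f"path {seed}: y(2) ~ {noisy_path(seed=seed)[-1]:+.4f}")

The parameter sigma scales the strength of the random kicks; setting it to zero recovers the deterministic Euler recipe.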

Of course, as the strength of the randomness decreases, the paths must tend closer and closer to the solution of the deterministic part of the equation.

[Figure: paths as the noise strength decreases, with the deterministic solution shown in green]

The following animations illustrate the path of a ball without and with randomness. Animation a) shows the path a ball follows according to Classical Mechanics (the green line in the figure above). With random kicks that happen very fast, the ball instead follows a different path, illustrated in animation b).

a) No randomness (animated gif)

b) With randomness (animated gif)

The equations discussed above are really just toys. The description of any situation more closely tied to reality must inevitably include more than one independent variable (potentially all three space variables and one time variable). You can take it as given that the recipes discussed above can be extended to more complicated situations (this is discussed in courses like CM 342), but even granted that we can solve these problems, what do we do with all the data that we produce?

This is the second way in which computers have revolutionized the practice of applied mathematics, namely by adding a visual element. This web page, and many of the web pages we provide links to, are full of pictures, both still and moving. Some are "fake" photos of experiments carried out only in cyberspace, while others (space-time plots used to track waves, for example) are tailored to convey information visually, even if they are almost completely abstract. At times (think of a shaded contour plot, like the one below) the generation of the image requires further mathematical techniques.

[Figure: shaded contour plot]
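For the curious, here is a minimal sketch of how such a shaded contour plot might be produced with matplotlib (the function z = sin(x)cos(y) is an arbitrary stand-in, not the data behind the figure above):

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0.0, 2.0 * np.pi, 200)
y = np.linspace(0.0, 2.0 * np.pi, 200)
X, Y = np.meshgrid(x, y)
Z = np.sin(X) * np.cos(Y)          # stand-in data on a grid

plt.contourf(X, Y, Z, levels=20)   # filled ("shaded") contours
plt.colorbar(label="z")
plt.show()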

Of course, the information presented visually does not have to come from a model. Indeed, the remote sensing of the Earth's atmosphere and oceans from space would be a largely fruitless exercise were it not for dedicated languages (like NCL from NCAR, and FERRET from NOAA) designed to turn the data delivered by satellites into visual images. These images, in turn, allow us to understand the patterns that dominate our atmosphere and oceans, patterns that are imperceptible to creatures who stand around two meters tall and are confined to the planet's surface.