Perhaps more than any other field of applied mathematics, continuum and fluid mechanics is intertwined with both the concept and practice of approximation. Of course any student of first year calculus learns about Taylor series as approximations to a function, and indeed local approximations by piecewise linear functions are an important part of numerical analysis. However, the idea of approximation from the point of view of continuum mechanics runs much deeper.
In some sense the very idea of simple mathematical laws to describe the deformation and flow of matter is itself an approximation, since matter is made up of a vast number of molecules. It is hopeless to try to describe the behaviour of all the molecules in a piece of material, and so a simplified description is sought. The two disciplines of physics that bridge the gap between the microscopic scale and the length scales of the everyday world are thermodynamics and statistical mechanics. It is the task of thermodynamics to describe the macroscopic, or en masse, effects of molecular motion, while statistical mechanics seeks to justify thermodynamic laws (such as heat flowing from hot to cold) from a more basic point of view. Notice that I am being careful not to say "more fundamental", since statistical mechanics requires drastic simplifying assumptions to carry out its work.
If we accept this first level of approximation, then we are left with the task of deriving the general laws of flow and deformation. This is the task of continuum mechanics and is taken up in the course AMATH 361. At this point we can take a sentence or two to describe the difficulties one would expect in the process of deriving the general laws. First of all, a sample of material occupies a region of three-dimensional space. This means that the functions (often called field variables) we are trying to solve for will have to be functions of up to three space variables, and possibly time as well. Typical examples of variables of interest are density, pressure, temperature, and chemical concentration (such as the salinity of the ocean). All of these are scalars, or functions that accept several inputs but produce only a single output. Hence they could be well described by what a student learns in the calculus courses MATH 137, 138 and 237. However, if one thinks about what makes a fluid a fluid, one will probably conclude that it is the ability to flow when perturbed (pouring juice out of a jug). This means that the fluid velocity is the variable of interest, and velocity, as we know, is a vector quantity. Thus we are talking about a function which takes several inputs and produces several outputs. The calculus of vector-valued functions is more complicated than that for scalar-valued functions and is generally not taught until later, in the course AMATH 231. This means a student is faced with the somewhat difficult task of building up two years of calculus knowledge before tackling even the simplest of tasks, such as defining what conservation of mass means for a flowing gas.
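As a preview of the kind of statement this machinery makes possible, conservation of mass for a flowing gas takes the form of the continuity equation (stated here for illustration only; its derivation belongs to AMATH 231 and 361):

```latex
% Conservation of mass (the continuity equation) for a gas with
% scalar density field \rho(x, y, z, t) and vector velocity field
% \vec{u}(x, y, z, t):
\frac{\partial \rho}{\partial t} + \nabla \cdot \left( \rho\, \vec{u} \right) = 0
```

Note that even this simplest of laws couples a scalar field (the density) to a vector field (the velocity), precisely the mix of objects described above.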
In the past this has attracted many great minds to study continuum mechanics, and the general theory does contain a great deal of mathematical beauty. However, there is no avoiding the fact that even if the project is carried out successfully, one is left with a highly nonlinear set of partial differential equations. This means that no general solution in terms of a mathematical formula is possible (I think this fact is intuitively obvious from the vast array of fluid phenomena we observe on a daily basis). We are thus left to wonder: how can any practical problems be solved?
The answer lies in the process sometimes called mathematical modeling. The whole idea is to take the complete mathematical description and leave out as many pieces as possible. The choice of what must be kept is driven by the application of interest. Thus, for example, if we do not wish to consider sound waves, we treat air, a gas, as if it were incompressible (the way water is).
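To make the air example concrete: treating a gas as incompressible amounts to replacing the full statement of mass conservation by the simpler condition that the velocity field is divergence-free (a standard illustration, not a derivation):

```latex
% Full mass conservation:
\frac{\partial \rho}{\partial t} + \nabla \cdot \left( \rho\, \vec{u} \right) = 0
% Incompressible approximation: treat the density \rho as constant,
% so mass conservation collapses to
\nabla \cdot \vec{u} = 0
% Sound waves are density fluctuations, so this single stroke
% removes them from the model.
```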
The idea of a mathematical method dependent on the question we are trying to answer is a deeply disturbing notion and is largely at odds with the bulk of our mathematical training. Still, there is no doubt that it has yielded impressive results. Long before computers, or even slide rules, estimates of phenomena as diverse as flow over airplane wings and the propagation of energy by waves on the surface of a lake were available.
Let us consider a simple example: a river bend such as the one pictured below.
The first approximation would take the rather complex geometry of an actual river bend and replace it with an idealized one, say a quarter circle such as that pictured below.
Next we choose coordinates. It seems natural to adopt polar coordinates. But should they be two-dimensional polar coordinates or a full 3D set of cylindrical polar coordinates? This will depend on the level of detail in the desired description. For a first guess you might argue that most rivers are much wider than they are deep and hence we expect the horizontal components of velocity to be much larger than the vertical component. This reduces the number of variables from three to two. You might also argue that the horizontal velocities won't vary much with depth (except at the very bottom) and so we can assume that all functions are independent of the vertical coordinate (often labeled z).
This is a considerable simplification, but we can do better. Ask yourself: what is the most important thing water does as it rounds a river bend? It goes around the bend, of course. So the most important component of velocity surely must be the one in the angular direction. Thus, to a first approximation we simply drop all the others. Now, we have already argued that this single component of velocity does not depend on depth. However, if we consider regions away from the inflow and outflow of the bend, we could argue that how far along the bend we are should not influence the structure of the flow. Furthermore, it is consistent with the other approximations we have made to say that the flow should not vary much in time. This would leave only a single component of the velocity, and that component a function of a single spatial coordinate (the radial distance r).
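The chain of assumptions above can be summarized symbolically. Starting from the full velocity field in cylindrical polar coordinates and striking out one dependency per assumption, a single unknown function of one variable remains (this is a sketch of the bookkeeping only, not of the governing equations themselves):

```latex
% Full velocity field in cylindrical polar coordinates:
\vec{u} = u_r\,\hat{e}_r + u_\theta\,\hat{e}_\theta + u_z\,\hat{e}_z,
\qquad u_i = u_i(r, \theta, z, t)
% Assumptions: the river is shallow compared to its width
% (u_z \approx 0), the horizontal flow is depth-independent
% (\partial/\partial z = 0), the water mainly goes around the bend
% (u_r \approx 0), the flow does not vary along the bend
% (\partial/\partial \theta = 0), and the flow is steady
% (\partial/\partial t = 0). What remains is
\vec{u} = u_\theta(r)\,\hat{e}_\theta
```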
It may be a good idea to go through the assumptions for yourself. Once you have done so you will find that the problem has undergone a drastic simplification. Indeed, mathematically we have gone from the troubling realm of partial differential equations to that of ordinary differential equations. Even though the actual equations of interest are beyond a first- or second-year discussion, I think anyone can appreciate the strength of the approach.
A final point should, of course, be how one tests whether the approximations made were actually valid. Traditionally the only choice was to conduct a well-controlled experiment, collect data, and compare that data with the predictions of the approximate theory. In the digital world it is more often the case that the differential equations governing the system before any approximation are solved numerically and the results compared with the predictions of the approximate theory. Even if the approximate theory is not quantitatively accurate, it may well turn out that it makes the correct qualitative predictions and as such has considerable utility.