Analogies and models: Part 1

During the data collection phase of research into the high school–university transition in chemistry,* my attention was caught by one comment that stated, word for word, “Our high school teachers lied to us.” I’ve heard this sentiment before, often in a semi-humorous way, but I was struck by this particular student’s vehement assertion. Why would students say such things, whether with ironic or sincere intent? One answer comes from educational research that demonstrates why we need to pay careful attention to how we teach theories and models.

Frames of reference

Two relevant concepts can be found in the work of Perry1 and Piaget2 on intellectual development. Perry’s progression can be thought of as illuminating a student’s intellectual outlook, while Piaget’s concrete and formal operational thinking describe the “intellectual toolkit” students can bring to their studies. While not directly related, they can both be considered measures of increasing maturity and intellectual sophistication. We can also relate them to Bloom’s cognitive domain taxonomy3 as shown below:

Fig. 1: Representation of intellectual development as a growth process along both Perry’s and Piaget’s scales, relative to the cognitive levels of attainment on Bloom’s taxonomic scale (revised).

Briefly, Perry’s scale can be simplified to the dualist, multiplist and relativist categories (there is one more general category, but it is not relevant here). Dualist thinkers look at ideas as intrinsically either right or wrong, with little to no sense of partial correctness. Conversely, a multiplist thinker is able to accept differing representations and incomplete descriptions, while a relativist is comfortable with the idea that our current knowledge on a topic is provisional and may change in the future. Similarly, Piaget’s concrete operational thinkers can use strategies such as classifying, sequencing and using analogies from concrete experience; formal operational thinkers can, however, employ proportional and probabilistic reasoning and use abstract analogies.

Another relevant idea is that of threshold concepts:4

“A threshold concept [is] akin to a portal, opening up a new and previously inaccessible way of thinking about something. It represents a transformed way of understanding, or interpreting, or viewing something without which the learner cannot progress.” (emphasis added).

Much discussion has taken place around how to identify subject-specific threshold concepts, and whether there are ‘universal’ threshold concepts. Two proposals for the latter category are the concept of scale (e.g., geologic time, cosmic distance, size) and the nature of scientific theories and models. Given the reliance chemists place on the use of models, our continual switching between different scales (macroscopic to atomic), and extensive use of symbolic representations, it is hardly surprising that some — and perhaps many — of our students struggle to make sense of it all.

Unhappily for some students, this also intersects with the more familiar idea of misconceptions, in that these often prevent students from properly understanding threshold concepts. For example, common sources of misconceptions include:5,6 failure to appreciate the nature and limits of scientific models, inappropriate use of analogies, and use of flawed or incorrect analogies. Why do students make such errors? Sometimes this stems from students’ incomplete or inaccurate prior knowledge; at other times, however, it is the way we ourselves present models and theories. In fact, we may share the same misconceptions with our students! To illustrate, let’s consider how we introduce and teach the concepts of elements and atoms.

An elementary misunderstanding

Quick activity #1: Take 30 seconds to jot down all the ways you represent hydrogen in your teaching. Don’t continue reading until you’ve done this.

Done? How many do you have? I came up with seven in about as many seconds, but I’m already thinking of a few more. Now take a moment to try and categorize them: which represent the atom, the element, and/or the molecule? Do any represent multiple categories? How many differ in only one small but significant detail? What information do you intend to convey when you use them, and what information are your students likely to derive from them?

If your students are still dualist thinkers, then only one of these representations can be the correct one; multiplist thinkers can accept multiple representations but may still be confused as to the differences between them or their uses. And remember: we tend to refer to all of these representations simply as ‘hydrogen’ — except none of these symbolic representations are actually hydrogen atoms (or molecules).

Quick activity #2: Without looking it up, jot down the definition of ‘atom’. Again, do this before you keep reading.

Done? Take a good, hard, critical look at what you wrote. Is it clear, or is there ambiguity in what you wrote? Here’s a list of definitions from various sources:

  • Smallest object that retains properties of an element
  • Smallest unit of an element, having all the characteristics of that element
  • Smallest part of an element that can exist chemically7
  • Smallest particle still characterizing a chemical element8

It could well be instructive to ask your students what they think the properties of an element might be, whether there’s any difference between an element and a chemical element, or if all particles are atoms. It’s also revealing to try and define ‘element’ without invoking the concept of ‘atom’. If you struggle with this, don’t feel bad — IUPAC can’t do it either!9 This exercise does, however, illustrate another source of confusion (and misconceptions) for our students: language. In the above example, we’ve used a number of common words (property, unit, element) in ways that are unique to chemistry; if a student were to try and understand atoms using the concept of ‘element’ the way it is defined in, say, physics or calculus, it would cause confusion and misunderstanding.

Quick activity #3: A metallic wire has the properties listed below. If you could isolate one single atom from this wire, which — if any — of these properties would it have?10

  1. Brown colour
  2. Conducts electricity
  3. Density of 8.93 g cm⁻³
  4. Expands on heating
  5. Malleable & ductile
  6. All of the above
  7. None of the above

I would suggest trying this out on your students. I’ve quizzed both high school teachers and university instructors; the initial reaction is often to pick density, although this is followed by puzzled expressions and second thoughts. The answer is, of course, ‘none of the above’ (so don’t assign this for marks!).

But the question does serve as a good illustration of how careful we need to be when talking about the properties of an element, since the way we explain things can undermine even our most careful definitions. Take, for example, how atoms were explained to me in high school — and, I suspect, how many of us have taught our own students:

Fig. 2: Models depicting copper atoms, starting from a copper kettle, to a cube of copper, to individual atoms of copper.

What is wrong with this picture? It works quite well from the macroscopic to the microscopic scale (copper shavings viewed under an optical microscope still appear the same colour, have the same density, expand when heated, and so on), but once we hit the sub-microscopic and nanoscale regimes,† the analogy breaks down. For example, colloidal copper nanoparticles or films only a few atomic layers thick show important differences in both physical and chemical properties compared with the bulk metal, and these are different again from those of much smaller atomic clusters or single atoms.

Explaining why, for example, dilithium (Li2) has such spectacularly different properties from metallic lithium (Li (s)) becomes easier after a short introduction to molecular orbital theory; clearly, that is inappropriate for students just learning about elements and atoms. And while the excuse to watch Star Trek re-runs in school might be appealing, it isn’t very helpful either. Within the context of the high school curriculum, about the best one can do is explain that at some point there are insufficient atoms to maintain metallic bonding, and atomic clusters are better represented as small covalent networks (like diamond, crystalline silicon and silica).

Popular science fiction references aside, this example illustrates one very important principle: all analogies have limitations, and break down at some point. It is therefore essential to determine beforehand, and then clearly communicate, these limitations when using any analogy in the classroom.

A Model Theory

As I write this, I have a stack of general chemistry texts opened up to the section explaining the ‘scientific method’. Although broadly similar, they differ in some details and specific definitions. And they are all wrong. Well, perhaps ‘wrong’ is overstating the case a bit; let’s just say that, from the perspective of someone who has engaged in several decades of scientific research, they are all somewhat unsatisfying.

For one thing, actual research is far, far messier than the nice neat flow charts found in some texts.‡ And even those that provide a more nuanced view fail to fully capture the chaotic reality that is scientific discovery. Our research question, for example, could be too vague, too naïve or based on a faulty premise. Likewise, our experiments might be misleading or inconclusive due to faulty methods, faulty instrumentation, unsuspected sources of variation or failure to extend observations over a sufficient range.

An example of this is the first set of atmospheric CO2 measurements made by Charles David Keeling and the staff at the Mauna Loa Observatory.11,12 If Keeling had gone with the prevailing view at the time, he would not have made the extensive measurements that he did, and he would not have bothered developing a system to measure atmospheric CO2 to the nearest 0.1 ppm. Yet without this, the seasonal variation in atmospheric CO2 would not have been observed; more importantly, neither would we have the Keeling curve showing the year-over-year increase in atmospheric CO2.

Of course, things don’t get any easier once you do have data to work with. Most of the texts I consulted rightly distinguished between hypotheses, theories and laws (although quite when an idea ceases to be a hypothesis and becomes a theory isn’t always clear-cut). Perhaps the least satisfying part of all the presentations, though, is the treatment of theories and models, which are sometimes treated as synonymous and sometimes not. If, as seems to be the case, scientists have a hard time with these concepts, it is hardly surprising that students struggle to fully grasp them too. Yet correctly grasping the nature of scientific models is, as mentioned in the introduction, key to progressing.

So what actually is a scientific model? It is, primarily, a representation of a theory that allows us to engage in thinking about that theory. As such, models may be real, physical or mathematical analogies, or purely symbolic. As an example, an orrery13 is a mechanical model representing an historical understanding of the motion of planets in the solar system. Modern mechanical and projection orreries are more correct models, useful for demonstrating the concepts of transits, eclipses and retrograde motion. But they are clearly still models, based on the underlying theories of gravity and planetary motion.

Good scientific models should have the following properties:

  • Descriptive power: The model should accurately describe the set of observations (data) from which the corresponding theory was derived.
  • Explanatory power: The model should provide insights into why the observed phenomenon generates the data it does.
  • Predictive power: The model should accurately predict behaviour one would expect to see for conditions both inside and outside the range of observations from which the corresponding theory was derived.

Note the use of the word ‘should’. Ideally, a model should have perfect descriptive, explanatory and predictive power. In reality, our models are often provisional, and only have predictive power within the range of currently observable data. They may also have limited descriptive power; that is, they may reproduce the experimental data to within some level of accuracy and precision (such as ±2%) that is considered good enough.
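The distinction between fitting well inside the observed range and predicting outside it is easy to demonstrate numerically. Below is a minimal sketch (plain Python, purely illustrative numbers of my own choosing, not from the article): a straight line is fitted by least squares to data that actually follow y = x². Within the fitted range the linear model is ‘good enough’; extrapolated well beyond it, it fails badly.

```python
# Illustrative sketch: a 'good enough' model inside its fitted range
# can fail badly outside it. Hypothetical data: the true behaviour is
# y = x**2, but we fit a straight line over the narrow range 0..1.

xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ys = [x * x for x in xs]                     # the 'observations'

# Ordinary least-squares fit of y = m*x + b
n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - m * mx

def linear_model(x):
    return m * x + b

# Descriptive power: inside the fitted range, the errors are small
max_error_inside = max(abs(linear_model(x) - x * x) for x in xs)

# Predictive power: well outside the range, the model breaks down
error_at_5 = abs(linear_model(5.0) - 5.0 ** 2)

print(f"fit: y = {m:.3f}x + {b:.3f}")                        # y = 1.000x - 0.125
print(f"max error for 0 <= x <= 1: {max_error_inside:.3f}")  # 0.125
print(f"error at x = 5:            {error_at_5:.3f}")        # 20.125
```

The linear fit is perfectly serviceable over the conditions it was derived from; the failure only shows up when we ask it about conditions it has never seen.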

Fig. 3: A mechanical heliocentric orrery made by R. B. Bate, circa 1812, showing the orbits of the planets and their moons14. Medieval versions showed a geocentric view.

Limited, symbolic, and ‘good enough’

What are the implications of the true nature of scientific models for our students? Firstly, those operating at a concrete operational level of thinking (Fig. 1) struggle to differentiate between a model that uses a physical analogy for a theory, and an actual physical description. In other words, the model becomes reality, whether it is a realistic model or not. It is, therefore, essential to identify when a physical analogy is used to let us engage with a theoretical concept; the classic example of this is arguably wave-particle duality as applied to the theory of electromagnetic radiation (light). In this context, it is worth noting that dualist thinkers struggle due to their either-or thinking; they have difficulty in conceiving of both–and as a legitimate alternative.

A second implication is that models employing analogies can lead to problems if a student has either incomplete knowledge of, or prior misconceptions concerning, the physical system used in the analogy. For example, a student with a poor understanding of water flow through pipes is unlikely to benefit from this analogy in explaining the flow of electrons through electrical circuits, and may well reach false conclusions that impede problem-solving and further learning.

A related observation is that students often latch onto extraneous or irrelevant features of a theory or model, but miss small yet vitally important details. Actually, that’s true for many things. Consider, for example, the following common misconceptions and misrepresentations:

  • The speed of light is constant (2.998 × 10⁸ m s⁻¹)
  • A molar solution is 1 mole of solute in one litre of water
  • pH values are always positive
  • Titration reactions are calculated as M1V1 = M2V2.
  • Solutes dissolve readily in water because there’s space between the water molecules
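Several of these can be checked with nothing more than a calculator. The sketch below (plain Python, illustrative values, and assuming ideal behaviour so that activity ≈ concentration) works through three of them: light only travels at c in vacuum, a sufficiently concentrated strong acid gives a negative pH, and M1V1 = M2V2 fails whenever the stoichiometry is not 1:1.

```python
import math

# 1. The speed of light is only constant *in vacuum*; in a medium it is c/n.
c_vacuum = 2.998e8                  # m/s
n_water = 1.333                     # refractive index of water (approximate)
c_water = c_vacuum / n_water        # ~2.25e8 m/s -- noticeably slower

# 2. pH is not always positive. For a 2 M strong monoprotic acid,
#    assuming ideal behaviour (activity ~ concentration):
pH = -math.log10(2.0)               # ~ -0.30

# 3. M1V1 = M2V2 assumes 1:1 stoichiometry. Titrating 25.0 mL of
#    0.100 M H2SO4 with 0.100 M NaOH (H2SO4 + 2 NaOH -> Na2SO4 + 2 H2O):
v_naive = (0.100 * 25.0) / 0.100        # 25.0 mL -- wrong
v_correct = 2 * (0.100 * 25.0) / 0.100  # 50.0 mL -- moles NaOH = 2 x moles acid

print(f"speed of light in water: {c_water:.3e} m/s")
print(f"pH of 2 M strong acid:   {pH:.2f}")
print(f"NaOH volume, naive vs correct: {v_naive} mL vs {v_correct} mL")
```

None of this is beyond a first-year course; the point is that each compact ‘rule’ carries an unstated assumption (vacuum, dilute solution, 1:1 stoichiometry) that the shortened version quietly drops.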

I’m sure you can add plenty of your own examples — science is full of such picky little details, waiting to trip up the unwary student or instructor. Why are we vulnerable to this? Likely because, as Keith Stanovich15 puts it, we tend to be cognitive misers: we all have a natural tendency towards simplifications, generalisations and intellectual shortcuts, because it makes thinking easier.

Another factor lies squarely on us, the instructors, however, and that is the manner of our presentation. We know from studies of working memory, for example, that rich multimedia presentations involving simultaneous complex visuals and text or audio can overload working memory.16,17 Over-simplified or unrealistic visuals, on the other hand, can actually lead to misconceptions and false analogies.

Take, for example, a salt dissolved in water: showing all the water molecules in a realistic and dynamic way makes it hard to spot the cations and anions; omitting the water molecules, however, creates the impression that there is a great deal of space between particles in solution. A better approach is to let the student toggle the appearance of the water molecules, so they gain a better appreciation for all the interactions and chaotic motion that occur in solutions.

A third implication, therefore, is that we should be explicit in drawing attention to the key features of any analogy or model, as well as clearly delineating the limits and usefulness of the model. In other words, we should teach the limits! This brings us back to the concept of the “good enough” model — one that works very well at describing, explaining, and even predicting things over a prescribed range of conditions, and is easier to use than the more complex version needed to adequately describe an extended range of conditions.

At the high school level, this is most clearly seen in the evolution of the models used to explain atomic, bonding and acid-base theories. I’ll take these topics up in detail in part 2 of this article. For now, however, let me leave you with one final important principle: when working through the development of any theory, never say, “So we throw this model out and get a new one!” You’ll see why in part 2.

Part 2 - The evolution of models, March 2018



†  Length scales of < 1 mm corresponding to volumes < 1 nL; and length scales of 1 nm or less.

‡  It more closely resembles a game of Snakes and Ladders than a simple flow chart.

References (online accessed 2017)

  1.  D.C. Finster, “Developmental instruction Part I: Perry’s model of intellectual development.” Journal of Chemical Education, 1989, 66(8), pages 659-661.
  2. J.D. Herron, “Piaget for Chemists: Explaining what ‘good’ students cannot understand.” Journal of Chemical Education, 1975, 52(3), pages 146-150.
  3. L.W. Anderson and D.R. Krathwohl, et al. (eds.), A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives, 2001, Addison Wesley Longman Inc., New York NY.
  4. J.H.F. Meyer and R. Land (eds.), Overcoming Barriers to Student Understanding: Threshold concepts and troublesome knowledge, 2006, Routledge, New York NY.
  5. V. Talanquer, “Commonsense chemistry: A model for understanding students’ alternative conceptions.” Journal of Chemical Education, 2006, 83(5), pages 811-816.
  6. V. Kind, "Beyond Appearances: Students’ misconceptions about basic chemical ideas", 2nd edition, Royal Society of Chemistry, 2004.
  7. Oxford Dictionary of Chemistry, 3rd edition, 1996, Oxford University Press, Oxford UK.
  8. IUPAC Gold Book:
  9. IUPAC Gold Book:
  10. Adapted from R. Ben-Zvi, B. Eylon and J. Silberstein, Journal of Chemical Education, 1986, 63(1), pages 64-66.
  11. D.C. Harris, Quantitative Chemical Analysis, 8th edition, 2010, W. H. Freeman and Company, New York NY, pages 1-5.
  12. D.C. Harris, “Charles David Keeling and the Story of Atmospheric CO2 Measurements.” Analytical Chemistry, 2010, 82(19), pages 7865–7870. On-line (2017):
  13. “Orrery”, Wikipedia
  14. Bate orrery, photograph by Birmingham Museums Trust, CC BY-SA 4.0.
  15. K.E. Stanovich, “Rational and Irrational Thought: The thinking that IQ tests miss.” Scientific American Mind, 2009, November/December issue, pages 34-39.
  16. D.C. Stone, “Learning styles: fact and fiction”, Chem 13 News, October 2014, pages 13-15.
  17. D. Clarke, “Mayer & Clark — 10 brilliant design rules for e-learning”.