
# Irrational vs rational: does it matter?

When we write down the decimal expansion of a rational number—that is, a number which can be written as a fraction—we find one of two things: either the decimal expansion is finite, e.g. $\frac{1}{2} = 0.5$ or $\frac{1}{4} = 0.25$, or it ends in a recurring block of digits, e.g. $\frac{1}{3} = 0.3333\dotsc$ or $\frac{1}{13} = 0.076923076923076923076923\dotsc$.
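This dichotomy falls out of long division: each step's remainder determines the next digit, and since there are only finitely many possible remainders, either some remainder is zero (the expansion terminates) or a remainder repeats (the digits recur from that point on). A minimal sketch of that argument as code, using a hypothetical helper `decimal_expansion` that writes the recurring block in brackets:

```python
def decimal_expansion(numerator: int, denominator: int) -> str:
    """Long division, tracking remainders to detect a recurring block.

    A finite expansion is returned as-is, e.g. "0.25"; a recurring
    one has its repeating block bracketed, e.g. "0.(076923)".
    """
    digits = []
    seen = {}  # remainder -> index of the digit it produced
    remainder = numerator % denominator
    while remainder != 0 and remainder not in seen:
        seen[remainder] = len(digits)
        remainder *= 10
        digits.append(str(remainder // denominator))
        remainder %= denominator
    whole = str(numerator // denominator)
    if remainder == 0:  # a remainder hit zero: finite expansion
        return whole + "." + "".join(digits)
    start = seen[remainder]  # a remainder repeated: digits recur from here
    return whole + "." + "".join(digits[:start]) + "(" + "".join(digits[start:]) + ")"

print(decimal_expansion(1, 4))   # finite
print(decimal_expansion(1, 13))  # recurring
```

Because a fraction $\frac{p}{q}$ has at most $q$ distinct remainders, the loop always halts, which is exactly why every rational number's expansion terminates or recurs.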

The irrational numbers, which cannot be written as fractions, are those for which this isn’t the case. Their decimal expansions are infinite and non-recurring. That’s an interesting mathematical distinction, but does it matter in real life?

Computers, which do most of our numerical calculations these days, cannot store infinite decimal expansions, so they cut them off after a finite number of decimal places. On the face of it, such rounding should not matter. After all, taking just three decimal places still gives you an accuracy of $1$ part in $1000$. In the 1960s, however, the meteorologist Edward Lorenz discovered that it does matter—a lot. Lorenz was running computer simulations of the weather. Given initial values that describe the weather today, the simulation would chomp through calculations that gave the corresponding values for tomorrow, the day after, and so on. These were the early days of computing, so his model wasn’t sophisticated enough to predict the real weather accurately. Nevertheless, Lorenz was pleased with his approach, seeing that it still gave realistic looking weather patterns.

One day he decided to run the same simulation twice. Rather than starting from the very beginning again, he took the numbers that the first run had produced half-way through and fed them in as the initial values for the second simulation. This shouldn't have made a difference: the second run would simply pick up where the first had left off at the half-way point, and the two should produce the same weather in the end. But to his surprise he found that the second run produced a completely different weather pattern. On inspection he realised that the initial values used for the second run had been rounded off. For example, rather than $0.506127$, which was what the computer had been working with during the first run, it had used $0.506$ in the second. This tiny discrepancy would not make much difference to the result of a single calculation, but the error snowballed over the many repeated calculations the computer performed.
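You can watch this snowballing for yourself without a weather model. The sketch below iterates the logistic map $x \mapsto 4x(1-x)$—a standard textbook example of a chaotic system, not Lorenz's equations—starting once from $0.506127$ and once from its rounded-off cousin $0.506$:

```python
# Two starting values that differ only from the fourth decimal place on,
# echoing the rounding Lorenz noticed in his printout.
x, y = 0.506127, 0.506
for step in range(1, 51):
    # One iteration of the logistic map, applied to both trajectories.
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: {x:.6f} vs {y:.6f}  (gap {abs(x - y):.6f})")
```

The gap roughly doubles with each iteration, so within a few dozen steps the two trajectories bear no resemblance to each other, even though they agreed to three decimal places at the start.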

What Lorenz had observed is now known as the butterfly effect, a term coined by Lorenz himself. The idea is that the tiny disturbance caused by the flap of a butterfly’s wing can escalate to cause a tornado halfway around the world. Technically the butterfly effect is known as sensitive dependence on initial conditions. It’s the hallmark of mathematical chaos. And it’s the main reason why we can’t predict the weather, the stock market, and many other processes with a high degree of accuracy.

You might like to try the problem Near miss, which has a related flavour.