Fermat’s Last Theorem
Very often, only after first intuiting a result does a mathematician proceed with the rigorous attempt to demonstrate the result through proof. Every so often, completing the rigorous proof takes much longer and requires more ingenuity than originally anticipated. Once in a millennium the rigorous proof takes over 350 years.
Fermat’s Last Theorem was one of those.
If you remember anything from 8th or 9th Grade Geometry, chances are it’s the Pythagorean Theorem. For any right triangle (any triangle in which one of the three angles is 90°), the square of its longest side (the hypotenuse) is equal to the sum of the squares of the two remaining sides. This is usually expressed by the equation a² + b² = c².
Famously, the Scarecrow in the movie The Wizard of Oz misstates the Pythagorean Theorem when he says, “The sum of the square roots of any two sides of an isosceles triangle is equal to the square root of the remaining side!” He doesn’t just misstate the Pythagorean Theorem, he gets it wrong in three different ways, applying it to the wrong type of triangle, failing to recognize that the hypotenuse is the only valid “remaining side,” and using square roots instead of squares.
Still, you have to admire the confidence with which he says something so incorrectly. I don’t think I’m being a pedantic know-it-all by pointing out these errors. Indeed, the movie writers wanted us to know it was wrong. Just as 0% correct on a true/false test can only be achieved by someone who knows all the correct answers, the Scarecrow’s errors could only have been put there intentionally.
This of course makes the irony all the more delicious when the Scarecrow marvels at his new-found brilliance exclaiming, “Oh Joy! Rapture! I’ve got a brain!” Sadly, the worth of real diplomas today is not much different than the Scarecrow’s fake one in 1939 when the movie was released, but I digress.
It’s sad that many remember the Pythagorean formula without ever knowing the amazing meaning behind it. Suppose you wanted to construct a square whose area was equal to the total area of two other squares. How would you do it? The Pythagorean Theorem tells us that you place the two existing squares corner-to-corner so that adjacent sides of the two squares form a right angle in between them, as shown by the squares in Figure 9.
Figure 9: Visualization of the Pythagorean Theorem.
A new square built off of the hypotenuse line segment drawn between the corners of the existing squares has an area that is exactly the sum of the areas of the existing squares. If the existing squares have sides of length a and b, respectively, and the new square has sides of length c, then the equation expressing this relationship is a² + b² = c².
In reference to Figure 9, while we may remember this equation in relation to the right triangle inscribed by the three squares, we forget (or never knew) what the equation illustrates by the squares that reside on the outside of the triangle.
This had incredibly practical implications. For one, it gave people a way, and indeed the only way, to construct a square whose area was exactly 2. Since a square of area 2 has sides equal to the square root of 2 (or √2), and since √2 is an irrational number and therefore cannot be exactly written out or measured off on its own, there was otherwise no way to reliably draw such a square.
But by constructing two 1 × 1 squares (that is, two “unit” squares each having an area of 1) and placing them corner-to-corner in the manner shown in Figure 9, the tilted square formed using the hypotenuse line segment as one of its sides has an area of exactly 2. What’s more, the length of the hypotenuse is exactly √2.
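The arithmetic behind that construction is easy to check numerically. Here is a quick sketch (the variable names are mine, and the tiny discrepancy in the last digit is just floating-point rounding):

```python
import math

# The two unit squares have a combined area of 1 + 1 = 2.
combined_area = 1**2 + 1**2

# By the Pythagorean Theorem, the hypotenuse drawn between their corners:
hypotenuse = math.sqrt(1**2 + 1**2)

# The tilted square built on that hypotenuse has exactly the combined area.
tilted_square_area = hypotenuse**2

print(hypotenuse)          # the square root of 2, about 1.41421356...
print(tilted_square_area)  # about 2, up to floating-point rounding
```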
From this simple example, we can understand how the relationship a2+b2=c2 allowed people in the ancient world to construct plots of land and to calculate distances that were thought impossible before, and that argument is no strawman.
The Pythagorean Theorem has an infinite number of integer solutions,[1] such as a = 3, b = 4, and c = 5. In other words, there are an infinite number of ways to find a sum of squares of two integers that is itself the square of an integer.
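That infinitude isn’t hard to see in action. A short sketch using Euclid’s classic formula (a = m² − n², b = 2mn, c = m² + n², a standard way to generate such triples; the function name is mine):

```python
def pythagorean_triples(limit):
    """Generate integer solutions of a^2 + b^2 = c^2 with c <= limit,
    using Euclid's formula: a = m^2 - n^2, b = 2mn, c = m^2 + n^2."""
    triples = []
    m = 2
    while m * m + 1 <= limit:       # smallest c for this m is m^2 + 1
        for n in range(1, m):
            a, b, c = m*m - n*n, 2*m*n, m*m + n*n
            if c <= limit:
                triples.append((a, b, c))
        m += 1
    return triples

for a, b, c in pythagorean_triples(30):
    print(a, b, c)   # (3, 4, 5) and friends; raise the limit for endlessly more
```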
What about cubes? Are there any integer solutions to a³ + b³ = c³ such that the sum of two integer cubes is itself the cube of an integer? How about for any other integer n greater than 2: are there any integer solutions for aⁿ + bⁿ = cⁿ?
Around the year 1637, French mathematician Pierre de Fermat scribbled what came to be the most notorious marginal note in history (or at least in math history) in his copy of the ancient Greek text on mathematics, Arithmetica. The substance of the note was that there are no integer solutions for aⁿ + bⁿ = cⁿ where n is an integer greater than 2. Moreover, Fermat noted that he himself had devised a clever proof that was, alas, too long to be contained in the margin.
Spoiler alert: Fermat never produced the tantalizingly promised proof.
The search for a proof to Fermat’s conjecture became mathematics’ equivalent of the sword stuck in the stone, and generation upon generation saw many would-be kings try and fail to prove their worthiness by withdrawing Excalibur.
With the advent of computers, it ultimately became possible to demonstrate by brute force calculation that no integer solutions could be found up to very large numbers, leading mathematicians to believe that Fermat’s conjecture was indeed true even if his claim of an elegant proof was probably not.
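To get a feel for what such brute-force checking looks like, here is a toy version (the real searches used far cleverer number theory and vastly larger bounds; these tiny ranges and the function name are mine, purely for illustration):

```python
# A tiny brute-force search for counterexamples to Fermat's conjecture:
# integer solutions of a^n + b^n = c^n for n > 2.
def search_counterexamples(n, limit):
    hits = []
    # Precompute all n-th powers that could possibly equal a^n + b^n.
    powers = {c**n: c for c in range(1, 2 * limit)}
    for a in range(1, limit):
        for b in range(a, limit):
            if a**n + b**n in powers:
                hits.append((a, b, powers[a**n + b**n]))
    return hits

for n in (3, 4, 5):
    print(n, search_counterexamples(n, 100))   # always an empty list
```

No matter how far the bounds are pushed, the list stays empty, which is exactly why mathematicians came to believe the conjecture even without a proof.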
Even though no proof existed (yet), the conjecture became known as a full-fledged theorem, owing not just to the failure to find a counterexample, but perhaps more importantly to the intuitive sense that Fermat’s Last Theorem just, well, seemed true.
Did Fermat stumble across one of the unprovable truths that Gödel’s First Incompleteness Theorem says must exist? Could it really be that there are truths accessible only through intuition and not through rational logic?
Recall that Gödel did not say that every system must have unprovable truths, but that every system must have truths that are unprovable from within the system itself. The paradox is only a paradox when viewed from within the constraints of the system. If one is able to transcend the system, to go above it or beyond it, the truth can be revealed.
Take, for example, the M.C. Escher print, Drawing Hands, reproduced in Figure 10. From within the picture, the scenario is a paradox of the chicken-and-egg variety. How can a first hand be drawing a second hand if the first hand didn’t exist until the second hand drew it? But we the viewers reside outside of the picture, and are able to discern that an unseen hand of an unseen artist must have drawn them both.
Figure 10: M.C. Escher’s “Drawing Hands”
This is analogous to how the proof for Fermat’s Last Theorem was ultimately accomplished. Rather than continue to hammer away at the problem in the confines of a “space” where it seemed impossible, the problem was translated into another “space” where it was easier to handle.
In this case, that other space involved the properties of modular forms and elliptic curves, which made it much easier to prove the non-existence of solutions that Fermat’s Last Theorem posited. It turns out that there are some strict rules about what is possible and impossible in the back-and-forth between modular forms and elliptic curves, and those rules were used to greatly simplify what otherwise seemed to be an insurmountable task. As is too often the case, this required the separate proof of another conjecture[2] along the way, which itself took quite a bit of doing.
When all was said and done, roughly 350 years after Fermat scribbled his note, the proof of Fermat’s Last Theorem was finished by Andrew Wiles, earning him the Abel Prize (the mathematics equivalent of the Nobel Prize). Befitting the one who would finally draw Excalibur from the stone, Wiles was a Brit. In the end, although portions of the proof were quite elegant, it is highly unlikely that the long and meandering effort was anything like what Fermat had in mind (or thought he had in mind).
In the example of Fermat’s Last Theorem, the theorem itself is not the paradox. The method of proof was also not technically a paradox. Instead, the paradox resides in how agonizingly difficult it was for rigorous logic to prove a truth that intuition so readily grasped.
Trying to understand this contradiction illustrates four things:
- First, many problems that seem intractable or even paradoxical from one perspective are solvable if we can manage to change our perspective.
- Second, changing our perspective is not merely an exercise of looking at things from a different angle, but from a whole new space that functions under a different set of rules.
- Third, getting out of our current space and into the new space involves a transformation that transcends both spaces.
- And fourth, we are willing to take the risk of transformation because we have already seen through intuition, and perhaps in faith, that truth indeed awaits us on the other side.
Cursion and Recursion were in a boat, Cursion got out, who was left?
As we explored earlier in the blog, many paradoxes are self-referential in some way. Recall Russell’s paradox of whether the set of all non-self-containing sets contains itself; and recall the sentence, “This sentence is false.” Both are explicitly self-referencing in a way that creates a contradiction.
Self-reference can also present itself implicitly. For example, it can be said that W.K. Clifford’s statement, “[i]t is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence,” is itself a statement of belief based upon insufficient evidence. Therefore, the statement is also about itself (and moreover contradicts itself) even though it doesn’t appear that way on its face.
While these seem like mere rhetorical sleights of hand for our pondering amusement, it turns out that self-reference helps uncover truth, beauty, and meaning. In mathematics, self-referencing (or recursive) functions are powerful tools for exploring the properties of complex systems like the weather.
When trying to characterize complex systems mathematically, we need to keep in mind that such systems are not of the set-it-and-forget-it type. While their behaviors may be governed by deterministic physical laws, they become more unpredictable the farther out you try to predict them. This is due to a phenomenon called sensitive dependence on initial conditions, which can make the behavior of a complex system seem chaotic.
The recursive nature of complex systems comes from how each new instant depends on the end state of the preceding instant. Because this feedback happens constantly, small variations can be quickly amplified like a microphone receiving its own sound back from a speaker. Imperceptible perturbations in the system can wreak havoc down the line.
The surprising thing is that the behavior, while unpredictable, remains bounded.[3] If you mapped the output of the recursive functions that governed the system, a complex and striking (often beautiful) shape would emerge that was both well-defined and never repeating. These shapes are called strange attractors because no matter how you tweak the system, the system keeps settling back into following the pattern of the attractor, as if pulled along by an invisible magnet.
Figure 11 shows an example of a Lorenz attractor, generated from a set of three simple recursive differential equations that Edward Lorenz used in 1963 to model the atmosphere of Earth.
Figure 11: Computer image of a Lorenz strange attractor (taken from https://paulbourke.net/fractals/lorenz/)
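For the curious, a minimal sketch of how such a trajectory is generated. Each new state is computed from the previous one and fed straight back in (the recursion that drives the chaos), using simple Euler stepping and the standard parameter values Lorenz studied; the function name and step size are my choices:

```python
# Lorenz's 1963 system, stepped forward by crude Euler integration.
def lorenz_trajectory(steps, dt=0.005, sigma=10.0, rho=28.0, beta=8.0/3.0):
    x, y, z = 1.0, 1.0, 1.0
    points = []
    for _ in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        # Feed the new state back in as the next input.
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        points.append((x, y, z))
    return points

points = lorenz_trajectory(20000)
# Unpredictable in detail, yet bounded: no coordinate ever runs off to infinity.
print(max(abs(v) for p in points for v in p))
```

Plotting those (x, y, z) points traces out the butterfly-shaped attractor in Figure 11.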
Strange attractors belong to a class of shapes called fractals, whose defining characteristic is that they appear to have the same amount of detail no matter how far you zoom in on them. The name “fractal” is meant to be shorthand for “fractional dimension,” meaning that the shapes are so convoluted that when you calculate their dimensionality, it comes out as a fraction that exceeds their ordinary (topological) dimension.
For example, one of the simplest fractals, the Sierpinski triangle, has a calculated dimension of about 1.585 (log 3 ÷ log 2): more than the dimension 1 of an ordinary curve, but less than the dimension 2 of the plane it sits in. It is constructed using the recursive formula of removing a central triangle from the middle of a larger triangle, and then repeating the same removal from each of the three triangles that are left over, and so on into infinity. Figure 12 shows an approximation of a Sierpinski triangle.
Figure 12: Sierpinski Triangle
One amazing property of fractals such as the Sierpinski triangle is that they end up having an area that vanishes and a perimeter that grows infinitely long. Talk about a paradox! How can a shape end up with zero area and infinite perimeter?
Consider Figure 13, which shows iterations in the construction of a Sierpinski triangle. At iteration 0, nothing has been removed from the original equilateral triangle, which has an area of A₀ and a perimeter of 3S (each side having length S).
At iteration 1, a central triangle is removed as shown. The removed triangle has sides of length ½S and an area of ¼A₀. As such, the parts of the original triangle that remain (the shaded parts of Iteration 1 in Figure 13) have an area of ¾A₀ and a perimeter of (3/2) × 3S = 4½S.
Figure 13 shows the area and perimeter at each of the next two iterations, along with the general case, which is any arbitrary iteration N. At iteration N, the area is given by the formula (¾)ᴺ × A₀ and the perimeter is given by the formula (3/2)ᴺ × 3S.
If you’re not a math whiz, then take my word for it that, as N approaches infinity, the area goes to zero and the perimeter goes to infinity. In fact, when N is 100, the area is about a third of a ten-billionth of a percent of the original area, and the perimeter is about 1.2 × 10¹⁸ times the original side S.
That means if the original triangle was about the size of a piece of paper, the area would be smaller than the size of subatomic particles and the perimeter would be longer than ten Milky Way galaxies strung out end-to-end, the long way.
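Those numbers are easy to check for yourself (a two-line sketch; the variable names are mine):

```python
# At iteration N of the Sierpinski construction, the remaining area is
# (3/4)^N of the original, and the perimeter is (3/2)^N times the original 3S.
N = 100
area_fraction = 0.75 ** N          # fraction of the original area remaining
perimeter_multiple = 3 * 1.5 ** N  # perimeter in units of the side length S

print(area_fraction)       # about 3.2e-13 of the original area
print(perimeter_multiple)  # about 1.2e18 times S
```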
Figure 13: Constructing a Sierpinski Triangle
The point of this exercise is to appreciate how self-reference and recursion can be the engine that drives the emergence of paradoxes. Specifically, we’ve seen how seemingly random and chaotic systems still can behave in a deterministic fashion when they are describable in terms of recursive formulae.
(As an aside, this realization formed the basis for the reconciliation between free will and determinism that I posited in my undergraduate honors thesis, as mentioned in the first post of this blog.)
We’ve also caught a glimpse of how the curious objects called fractals can challenge our ideas about infinity, and how the infinitesimally small and the infinitely large are tangled together in a symbiotic relationship. And as we’ll see in upcoming sections, we’re not done encountering the strange nature of infinity.
In recursive mathematics, a formula spits out little versions of itself that in turn spit out little versions of themselves, and so on. While each version may not be identical, they carry with them the signature of their “maker” – a functional essence that allows them to create their own offspring who also have the ability to create and to pass along this essence.
The recursive nature of creating creative creatures that share the image and likeness of the creator is itself an engine that produces transcendent truths, higher meanings, and wondrous beauty. In like manner, we can recognize that we are creative creatures caught in a recursive strange loop, but we should also recognize that we ourselves are strange loops.
If we were not strange loops, we could not participate in propagating the recursion. As Douglas Hofstadter explicitly and provocatively hypothesized, our consciousness is tied up in, and owes its very existence to, the fact that we are all strange loops.[5]
Whereas Hofstadter was reluctant to make the clear connection to theology and the nature of our Creator, Dorothy Sayers was not so timid. She concluded after thorough examination that when we create, it is revealed that, “[t]he mind of the maker and the Mind of the Maker are formed on the same pattern, and all their works are made in their own image.”[6]
By creating (and indeed by procreating), we are imitating God and iterating the recursive function of His creation. When our creative activities are ordered to God’s, they are joined to His, which allows us to participate in the transcendence that in turn transforms us. We become co-participants in our own salvation!
This is what is meant by Philippians 2:12-13: “[W]ork out your own salvation with fear and trembling; for it is God who is at work in you, enabling you both to will and to work for his good pleasure.”
C.S. Lewis discussed the paradox of Philippians 2:12-13 in Mere Christianity. He observed that in this single sentence we are first advised to work out our own salvation with fear and trembling (which sounds like everything depends on our good actions), and then we are told that it is God who works in us and through us (meaning that God is responsible for everything we do that is good). So, which is it?
It should not be a surprise by now that it is both. God invites us to participate in His goodness and to join our triumphs and sufferings to His. As C.S. Lewis said, this is the sort of thing we can expect from Christianity, and so we ought to learn to embrace it.
If we are strange loops, what does that entail? It must at least entail the ability to reflect upon one’s own existence in self-reference, and thereby transcend it – to be able to get outside of one’s own thoughts, and be aware of one’s own awareness. This is the essence of what we call consciousness.
It also must entail the ability to create recursively – to generate new creations that themselves are able to create recursively. Since our creative powers come from our Creator, those powers are unable to transcend and propagate the strange loop unless we join those efforts back to His.
To put it in the language of recursive mathematics, we take God’s output as our input, calculate the result, and plug it back into the recursive function of His will so that through us He continues to produce new and wondrous outputs.
The beauty is that, so long as we are willing to offer it all back to Him, it doesn’t matter how good or worthy or valuable our contributions are. In fact, God can use our suffering and failure just as fruitfully as our joy and success.[7] His will is a strange attractor whose pull we cannot escape.
A Leap into the Quantum World
Those who doubt that there is a spiritual existence “above us” governed by physics very different from that of our visible universe should remember the reality of the quantum realm “below us,” whose physics is perhaps even stranger.
If it sounds like I consider the quantum realm to be a different universe from “ours” instead of merely the fundamental particles that constitute it, that’s because in a way it is. We can’t even observe the workings of the quantum realm without disturbing it in ways that affect our measurements. It is a realm where stuff exists not as “stuff” but as superimposed probabilistic states.
What appear to be particles behave like waves, and what appear to be waves behave like particles. There is no continuum of motion nor of quantity. When things move from point A to point B, they don’t traverse the space between. When an excited particle calms down, it gives off energy, but only in discrete packets.
Certain types of particles, while being forbidden from occupying the same quantum state (thus preventing the quantum realm equivalent of embarrassingly showing up at a party wearing the same outfit as someone else), are not forbidden from passing through walls in ghost-like fashion. And that’s just the beginning.
The term “quantum,” which is Latin for quantity or amount, was claimed for use in physics by Max Planck around the year 1900 to refer to the idea that matter and energy are made up of discrete packets – minimum amounts that could not be further reduced. He called these packets quanta. These quantized amounts are extremely small, but together in large enough numbers will combine to appear as a continuum governed by classical physics.
The phrase “quantum leap” is often misused in reference to large advancements when it more appropriately refers to abrupt movements from one state to another without residing between them, and typically at a tiny scale.[8]
While the quantum realm is exceedingly strange, the paradoxes it produces are not necessarily paradoxes within the quantum world itself, but rather become paradoxes due to how quantum physics relates to classical physics.
In other words, we run into paradox when trying to reconcile the strange behaviors occurring in the quantum realm with the physics we experience in the classical world, and how one transitions into the other. Not least among these paradoxes is how the weirdly random and non-deterministic “stuff” of the quantum realm constitutes everything we can observe, which is ultimately governed by purely deterministic physical laws.
Somehow, God found a way to turn a game of chance into results that repeat without fail. Grappling with the paradoxes presented in the transition from the quantum realm to the classical universe is instructive because it may provide some insights into the paradoxes presented when trying to understand the mysteries of spiritual existence while being stuck here in our physical existence.
In the quantum realm, certain fundamental particles like electrons cannot have well-defined values for both position and velocity. That means they are never in exactly one place at one time, nor moving with a single definite speed and direction. They are “smeared out” in both space and time.
It’s not just a matter of what we’re allowed to observe, it’s that they really don’t have position and velocity as well-defined quantities. However, as an observer, I can force one of those quantities to be well-defined enough that I can measure it, but only one at a time. The other quantity, the non-measured one, becomes more undefined in proportion to how well I have been able to determine the measured quantity.
Let’s say I was theoretically able to determine the velocity of an electron with near perfect precision. That would mean that the electron could be anywhere in the universe. This strains our concept of what it means to measure and to observe if by observing we disturb the system; and this strains our concept of what it means for a thing to exist if its state of existence is in fact indeterminate.
So what the heck are electrons anyway? It’s difficult to say. Every analogy available to compare electrons to things with which we’re familiar quickly breaks down. We can say that they sometimes behave like particles, and sometimes behave like waves, but that doesn’t mean that they morph between being a particle and a wave from one instant to another. Or does it?
About the best we can do is to derive some mathematical equations that accurately describe electron behavior. But even these equations don’t jibe with our experience in the macro world. Instead of describing tangible entities that have well-defined dimensions, positions, and velocities, these equations are more akin to “probability functions” that attempt to replicate how electrons are smeared out in time and space in a superposition of possible states, as well as how electrons are both allowed and forbidden to interact with other “stuff” in the quantum realm.
What we can’t do is reconcile how such smeared out, undefinable packets of probability come together to form matter (physical stuff) in the macro world that behaves in a well-defined, highly deterministic, and highly predictable manner. Maybe the best we can do is to be content with saying that electrons are simply electrons and not really like anything else.
Both the quantum world and the macro world constitute our physical universe, and thus they must somehow be reconcilable. Any reconciliation of the quantum realm with our macro world must account for how our macro world operates in a continuum whereas the quantum realm functions in discrete jumps.
What do I mean by that?
When you, standing at point A, throw a ball to your friend who is standing at point B, the ball follows an arc-like path from A to B, traversing every one of the infinite points along that path. The ball does not disappear at one point only to reappear at another point along the path. The space in between is continuous. Moreover, the mathematics that perfectly describes the behavior of the ball, including launch angle, spin of the ball, effects of gravity, air resistance, and wind, the exact positions of points A and B, and so forth, are the mathematics of a continuum, not of discrete, discontinuous jumps.
Meanwhile, in the quantum realm, the fundamental “particles” that constitute the matter that makes up the ball do not follow a continuum. In the quantum realm, there are discrete packets of energy that cannot be further divided, there are separated “levels” from which and to which particles can jump back and forth without ever residing anywhere in between, and there are rules that permit particles to exist only in certain states, such as spin up or spin down, but not in any others.
How do we get from that discrete quantum world of packets and jumps to a continuous macro world of smooth motion?
Attempts to reconcile the discrete quantum realm with the continuous macro world have focused on three principles, namely quantum decoherence, the correspondence principle, and statistical averaging.
As we’ve seen, quantum realm particles exist in superpositions of states (what we’ve called the smearing out in space and time), but when they interact with their environment and other particles, they are forced to “choose” a definite state, thereby rapidly shedding their superposition. This process is called decoherence, and it happens whenever a quantum system becomes entangled with its surroundings.
For example, two electrons cannot occupy the same quantum state, and so they are forced to “choose” from among possible variant states when they encounter each other. Before they met, they would each exist in a 50/50 superposition of up and down spins, both content to remain uncommitted. But when they interact, they cannot remain uncommitted – one has to be up and the other has to be down.
It’s like two people passing simultaneously through a narrow doorway. They can’t both go straight down the middle. One picks one side and the other picks the other and they can safely pass.
The correspondence principle and statistical averaging both rely on the fact that any macro system is composed of quantum particles and effects in such vast numbers that the probabilistic smearing and discreteness of the quantum realm disappears. One flipped coin could be either a head or a tail. Thirty quadrillion[9] flipped coins would always produce a 50/50 heads-to-tails result with an accuracy that could not be distinguished from perfect, even though each individual coin’s flip is, well, a coin flip.
This also accounts for how the random and unpredictable motion of individual gas molecules in a closed volume (behavior that is effectively random at the level of each molecule, much like an individual quantum event) results in completely predictable and deterministic pressure and temperature (classical behavior) every single time.
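Statistical averaging is easy to watch in miniature with simulated coin flips (a toy sketch; the seed is fixed only to make the run repeatable):

```python
import random

# The relative deviation from a perfect 50/50 split shrinks roughly as
# 1/sqrt(N) as the number of flips grows: individual randomness, collective
# predictability.
random.seed(42)
for n in (100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    deviation = abs(heads / n - 0.5)
    print(n, deviation)   # the deviation shrinks as n grows
```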
But do any of these principles really tell us how the quantum realm transitions from discreteness into continuity? They explain how we can get from one extreme to the other, but they don’t really explain anything about the transition.
Perhaps there is no transition because the physical world simply doesn’t have continuums, just the illusion of continuums produced by the averaging effect of an extremely large number of quantum events. In this scenario, everything is ultimately made up of discrete packets, which are governed by the physics of the quantum world.
As such, perhaps the classical physics that we experience in everyday life – throwing balls, flipping coins, driving cars, boiling water, switching on a light, and so forth – is merely an approximation (albeit a very, very good approximation) of what would happen if the underlying quantum physics were to be repeated an infinite number of times.
We’ll get to infinity in the next section, but suffice it for now to wonder whether infinitesimal discreteness is the key to avoiding the strangeness of infinities, but at the price of introducing the strangeness of the quantum. Amazingly, in this way the smallest dimensions of the quantum realm are linked to the unfathomable vastness of the known universe because, if the physical world is made up of discreteness, that perhaps implies that the universe itself is finite.
Before we leave the quantum realm, do you recall what was said earlier about the paradoxes presented by the quantum realm not necessarily being paradoxes within the quantum realm itself? Well, “quantum entanglement” is one of the reasons for the “not necessarily.”
Quantum entanglement refers to the phenomenon where two or more quantum particles become interconnected in such a way that the state of one particle instantaneously influences the state of the other(s), no matter how far apart they are. Albert Einstein famously described quantum entanglement as “spooky action at a distance.”
Not only does such a phenomenon defy what we know about classical physics, it’s hard to make sense of it even in the quantum realm. The simplest illustration is two entangled electrons. Each electron has a spin that can be either up or down. When the electrons are entangled, their spins are linked. While the spins remain unmeasured or unobserved, each electron exists in a probabilistic superposition of possible spin states.
If the spin of one of the electrons is determined through measurement, then the spin of the other electron will instantaneously be the opposite, even if the electrons are separated by vast distances and even if the measurement occurred long after the electrons became entangled.
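The measurement correlation can be mimicked with a deliberately crude toy model (NOT a real quantum simulation: Bell’s theorem shows that entanglement statistics at arbitrary measurement angles cannot be reproduced by any such local bookkeeping; the function name is mine):

```python
import random

# Toy model of the simplest entanglement scenario: measuring both electrons
# along the same axis always yields opposite spins, however far apart they are.
def measure_entangled_pair():
    first = random.choice(["up", "down"])        # each outcome is 50/50 random...
    second = "down" if first == "up" else "up"   # ...but the partner is always opposite
    return first, second

for _ in range(5):
    a, b = measure_entangled_pair()
    print(a, b)   # perfectly anti-correlated, every single time
```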
Instantaneous correlation between particles no matter the distance between them seems to be the stuff of science fiction, but it has been experimentally confirmed many times over (even though, strangely, entanglement cannot be used to send information faster than light). In fact, quantum entanglement plays an important role in the emerging discipline of quantum computing.
Quantum computing enables superfast computation by harnessing the hyper-parallelism that comes from holding bits of quantum information in superpositions of quantum states, so that the system in a sense explores all possible computational states at once. Loosely speaking, generating a result then becomes more an exercise in selection and extraction, using interference to amplify the answer you want, than in step-by-step calculation.
The idea is that the old way of crunching binary bits one after another in linear algorithmic fashion until a solution is reached gives way to a different approach, which allows all possibilities and then extracts the one you need – like Dr. Strange in Avengers: Infinity War.
At a high level, quantum computing seems to have much more in common with intuition than algorithmic calculation. Recall from a prior blog post that Roger Penrose used Gödel’s incompleteness theorems to conclude that human consciousness is not the product of an algorithmically functioning mind, but rather has a non-algorithmic quality.
Penrose ultimately latched on to quantum effects as a model for consciousness, arguing that consciousness is based on the non-computational collapse of coherent quantum superpositions in cellular structures within neurons known as microtubules.
While remaining unsubstantiated, Penrose’s ideas on consciousness raise some interesting questions. If quantum effects are responsible for consciousness, can the randomness and superposition of states of the quantum world be analogous to what we experience as free will? Like a quantum computer, all the possible calculations exist, but it is up to us to extract the ones we choose to implement.
Does this help explain how God can accomplish His plan while affording us free will? After all, our trip into the quantum realm seems to demonstrate that deterministic physics can flow out of indeterminate randomness.
One thing we know about consciousness is that it allows self-awareness – the human brain can think about itself. This gives rise to the concepts of self-reference, recursion, and strange loops that we discussed earlier. We explored not only how strange loops related to paradox and how they can reveal higher level meaning, but we also saw how simple recursive systems can exhibit behavior that seems random and chaotic while remaining bounded to the fractal pathways of strange attractors.
Perhaps recursive chaos also provides a model for free will within God’s determined plan since, as with the quantum realm, unpredictability at the individual “decision” level ultimately gives way to a fully deterministic system at the big picture level.
If God can create a physics that harnesses unpredictability to bring forth order, surely God can accomplish His plan while affording us the dignity of free will and free choice. Yes, indeed, God writes straight with crooked lines.[10]
Monkeying around with Infinity
What does infinity mean to you? Is it some really big number that is bigger than any other number you can think of? How do you visualize something that, no matter how far you travel, you are still no closer to than when you started?
One whimsical way to contemplate infinity is through the infinite monkey theorem, which states that a monkey hitting typewriter keys at random for an infinite amount of time will almost surely type out any given text, including the complete works of William Shakespeare. A restatement of the infinite monkey theorem is that, if you had an infinite number of monkeys typing, at least one of them would randomly replicate Shakespeare’s Hamlet.
Over the years, infinite monkeys producing Shakespeare via random typing became somewhat of a pop culture trope. Even the TV show The Simpsons got in on the act when Mr. Burns dropped in to check on his team of typing monkeys, only to find that all they could manage was, “It was the best of times, it was the BLURST of times.” Stupid monkeys.
What pop culture gets wrong about the infinite monkeys scenario is in treating it like a commentary on probability. In truth, the infinite monkeys scenario has nothing useful to say about probability since it is so insanely and ridiculously contrived that it makes no sense in our reality.
For example, if every subatomic particle in the known universe were a monkey with a typewriter, and they were all typing from the moment of the Big Bang until now, we would still be something like 10^10,000 orders of magnitude from even getting within sniffing distance of a 1 in a trillion chance of producing just one act of Much Ado about Nothing, which is a tragedy.
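A back-of-the-envelope sketch gives a feel for the scale. Assuming a simplified 27-key typewriter (26 letters plus a space), the odds against randomly typing even one short phrase are already astronomical:

```python
from math import log10

# Chance that a monkey types one given 18-keystroke phrase correctly on the
# first try, on a simplified 27-key typewriter (an assumption for illustration).
phrase = "to be or not to be"
keys = 27
chance = (1 / keys) ** len(phrase)
magnitude = -log10(chance)   # orders of magnitude against: about 26
```

Roughly 1 in 10^26 for eighteen keystrokes; an entire act of a play pushes the exponent into the thousands, which is why the scenario says more about infinity than about probability.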
To say that insanely improbable events start to become possible as we approach infinity is not a statement about how insanely improbable the insanely improbable event is, but rather is a statement about the nature of infinity.
Our instinct is to treat infinity as a number – a really, really, really big number, but a number nonetheless. But infinity is not a number. It’s a concept. And treating it as if it were a number puts us smack dab in the middle of paradox.
The concept of infinity was first recorded around 600 B.C., and was used in the context of philosophy, not numbers or mathematics. From this time forward, infinity was treated as an intellectual vulgarity – a rude way to destroy a polite philosophical discussion. This largely owed to infinity’s unique ability to produce paradoxes.
By extrapolating out into infinity, anything thought impossible could be made possible, and anything ridiculous could be normalized – just like monkeys typing Shakespeare. In 1775, patriotism may have been the last refuge of the scoundrel (as Samuel Johnson famously quipped), but more than two thousand years before that, the scoundrel who wanted to lob a verbal grenade could duck for cover behind a wall of infinity.
There was simply no way to overcome being shouted down by infinity. And while humankind finally got over the shock value (thankfully, or the Calculus would have never been developed), we’ve never solved the paradoxes. Instead, we tend to ignore the singularities and treat infinity like just a normal, albeit really, really, really big, number.
As with most incomprehensible things, sometimes it’s easier to contemplate them by flipping the script and approaching them from the other side – in the case of infinity, from the perspective of the infinitesimally small rather than the infinitely large.
Consider that infinity can be defined as the point at which the following limit relationship becomes true: 1/n = 0 as n → ∞. This means that as we let n get bigger and bigger, the quantity 1/n gets smaller and smaller until n becomes big enough for 1/n to be exactly equal to 0, which is the point when infinity is reached. Of course, that never happens.
But the crazy thing about infinity is that, if we were able to make n big enough for 1/n to be exactly equal to 0, then the “1” in the numerator could be any positive finite number, like the size of the known universe in millimeters, and the identity with 0 still holds.
That’s another way of saying that no matter how big of a number you can dream up, in comparison to infinity it might as well just be 1. That’s not the behavior of a normal number. That’s the behavior of a concept so powerful it can eliminate all the distinctions between numbers that make a real difference in our lives.
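A tiny sketch makes the point numerically (the numerator here is an arbitrary stand-in for "any huge number"):

```python
# No matter how large the numerator, dividing by an ever-growing n drives the
# quotient below any tolerance you name, yet it never actually reaches 0.
N = 10 ** 80   # roughly the number of atoms in the observable universe

def n_needed(numerator, tolerance):
    """Smallest power of ten n for which numerator / n drops below tolerance."""
    n = 1
    while numerator / n >= tolerance:
        n *= 10
    return n

n = n_needed(N, 1e-6)
quotient = N / n   # tiny, but still strictly greater than zero
```

Even a numerator the size of the universe gets crushed below one-millionth, and nothing stops n from growing further: against infinity, 10^80 might as well be 1.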
While infinity sees no difference between a ten-ton truck and a three-pound scooter, you will surely notice which one is parked on your foot. So, although relating monkeys on typewriters to infinity may be a striking visualization, it’s really just sidestepping the issue in favor of illustrating its absurdity.
But is the concept of infinity merely absurd?
Recall the discussion of the halting problem earlier in this blog. Alan Turing used it as a way to map Gödel’s incompleteness theorems onto the world of computers. The halting problem involves determining whether an arbitrary computer program will finish running or will continue to run forever.
Turing found that the halting problem is undecidable, meaning that no general algorithm exists that solves the halting problem for all possible computer programs and inputs. This is tantamount to saying that in order to know that a computer program will run forever, we must wait until forever is complete.
But forever is just another word for infinity, and we know we can never reach infinity – we can’t even get closer to it. And yet, intuitive minds like ours, ones that don’t function algorithmically, can discern what programs will fail to halt.
Does that mean our minds can arrive at infinity? Hardly. But it does mean that we are capable of recognizing higher level truths that cannot be reached from inside the system.
Some of the most fascinating and enduring illustrations of infinity were some of the very first. Zeno was an ancient Greek philosopher who lived from 490 B.C. to 430 B.C. He used a series of philosophical arguments, which today are called Zeno’s paradoxes, to challenge the notion of space and time, even to the point of making all motion seem to be an illusion.
For example, he considered the task of walking from point A to point B. In order to accomplish that task, one must first traverse half the distance. But before arriving at half the distance, one must first traverse half of that distance, or one quarter of the total. And before arriving at one quarter of the distance from A to B, one must traverse an eighth of the total. And so on.
In this way, the task of walking from point A to point B can be forever divided into infinitely many smaller steps, making the one simple task into an infinite number of smaller tasks. Zeno argued that an infinite number of tasks cannot be completed in a finite amount of time, and therefore motion itself must be an illusion.
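Zeno's halves can be added up exactly with rational arithmetic; this sketch (the 50-step cutoff is arbitrary) shows the sum creeping toward 1 without ever arriving:

```python
from fractions import Fraction

# Zeno's halves, summed exactly: after any finite number of steps the total
# falls short of 1 by precisely the size of the last half-step taken.
total = Fraction(0)
step = Fraction(1, 2)
for _ in range(50):
    total += step
    step /= 2
shortfall = 1 - total   # exactly (1/2)**50
```

After 50 halvings the shortfall is (1/2)^50, unimaginably small but still not zero; only the limit of the infinite process equals 1 exactly.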
If that particular paradox of Zeno’s doesn’t fool you then try this one.
Achilles challenged the Tortoise to a race. Because Achilles was much faster, he graciously gave the Tortoise a sizable head start. When the gun went off to signal the start of the race, Achilles quickly arrived at the spot where the Tortoise began. But even though Achilles was fast, it took a certain amount of time for him to arrive there, and in that time, the Tortoise also advanced. A short time later, Achilles arrived at the place where the Tortoise was when Achilles was at the head start location. Again, the Tortoise was not there, having also advanced a small distance. Each time Achilles arrived at where the Tortoise had once been, he found the Tortoise had also moved forward.
Therefore, try as he might, through all eternity Achilles could never overtake the Tortoise, despite catching up to where the Tortoise used to be an infinite number of times. In a strange way, the explanation for why Achilles can never overtake the Tortoise strikes a logical chord, and yet it confounds every real world experience.
This Zeno guy must have been fun at parties. “Hey, Zeno, you didn’t finish your beer.” “I would have, but I don’t have time for an infinite number of sips.” It would take another 2100 years before the invention of calculus would finally allow Zeno to finish that beer.
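Calculus resolves the race by summing Zeno's infinitely many catch-up intervals. This sketch uses made-up numbers (Achilles at 10 m/s, the Tortoise at 1 m/s with a 90 m head start) and shows the intervals converging to exactly what simple algebra gives, 90/(10 − 1) = 10 seconds:

```python
from fractions import Fraction

# Sum Zeno's catch-up intervals exactly. Each interval is 1/10 the length of
# the one before, so the elapsed time converges geometrically toward 10 s.
achilles_speed, tortoise_speed, head_start = 10, 1, 90

elapsed, gap = Fraction(0), Fraction(head_start)
for _ in range(60):
    interval = gap / achilles_speed     # time to reach the Tortoise's old spot
    elapsed += interval
    gap = tortoise_speed * interval     # how far the Tortoise crept meanwhile
```

After 60 intervals the elapsed time is within 10^(−59) seconds of 10: the infinite tasks pile into a finite total, and Achilles sails past.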
Take it to the Limit…
After so many centuries of trying to avoid the mischief and mayhem that infinity creates, mathematicians could avoid it no longer.
In fact, mathematics was so primed and ready for calculus that two different mathematicians independently developed their own flavors of it at about the same time in the latter part of the 17th century. The two were Isaac Newton (who seemed to have nothing better to do with his free time when he was 23 years old) and Gottfried Wilhelm Leibniz (whose philosophical musings would later influence our old friend Kurt Gödel).
The word “calculus” is Latin for “small stone,” and while that small stone continues to be painful in its passing for many high school students, for our purposes it succeeded in obliterating the issues with Zeno and his paradoxes. Through calculus, a methodology was developed whereby infinitely many infinitesimally small things could be added together instantly with a definite (and correct) result.
No longer were great thinkers paralyzed by the impossibility of motion because they could not fathom completing infinitely many tasks in a finite amount of time just to get from point A to point B. Now mathematicians knew how to do it. And not only that, but they were able to prove that what once seemed like trickery and hand waving was robust and real. They finally defeated infinity. Or did they?
The stumbling block for mathematicians in accepting calculus was the concept of a limit. We used this concept a few paragraphs back when defining infinity. While now long accepted, the use of limits was initially seen as cheating – sort of including a fudge factor to get around the difficulty of rigor, and then watching the fudge factor vanish to as close to zero as desired for it to be disregarded altogether.
The problem remained as to whether and under what conditions that fudge factor could be said to vanish, and that problem related to the convergence of infinite series. The question of whether an infinite series could converge to a definite point was the specter of Zeno’s paradoxes rising from the grave. The big difference was that Zeno dealt with motion and distance, which are physical things, as opposed to pure numbers.
Somehow by untethering the concepts from physical objects and tethering them to abstractions like numbers, it became OK to imagine that a process like an infinite sum of geometrically smaller numbers could go on forever and still produce a finite result.
Something about this seemed like handwaving. It felt less than rigorous to dismiss the vanishing tail that went on forever. And yet the results that were calculated in integral calculus or converging infinite sums were correct and repeatable.
The answer was out there, and it was known, but it took an intuitive leap of faith to go from only being able to calculate an approximation (albeit with whatever precision one’s patience allowed) to being able to calculate the answer exactly. And while the difference between the two is that very vanishing amount we must eventually disregard to adopt the processes of calculus, at any given finite step that tiny difference remains stubbornly nonzero, grossly out of proportion to the infinitely infinitesimal.
Nicholas of Cusa, a German philosopher, mathematician, and Catholic priest who lived from 1401 to 1464, held that a finite intelligence (such as our own) cannot attain the fullness of truth, but can only approach it asymptotically.[11] The infinite, including God, remains beyond our understanding. The only way we can grow in understanding towards truth, towards the infinite, and towards God is through attempting to understand the implications of our inability to understand.
Hey, that sounds an awful lot like learning how to embrace paradox.
Whereas we may only be able to approach the truth asymptotically and from a single direction, a falsehood can be stumbled upon in any number of ways.[12] A wonderfully playful example is Ramanujan’s proof that the infinite sum 1 + 2 + 3 + 4 + 5 + … = -1/12.
Yes, you read that correctly – the sum of all the positive integers from one to infinity is equal to negative one-twelfth.
Before we recount his proof of this remarkable conclusion, let’s quickly meet Srinivasa Ramanujan. Born in 1887, Ramanujan was brought up with no formal training in mathematics, but still displayed one of the most uniquely intuitive mathematical minds in human history. Before dying an early death at the age of 32 from tuberculosis, he amassed thousands of results, including solutions to mathematical problems many considered unsolvable.
His mentor was G.H. Hardy, a University of Cambridge mathematician who was a friend and colleague of our erstwhile nemesis Bertrand Russell. Hardy tried to instill in Ramanujan the discipline of rigorous proofs, which Ramanujan disdained. He felt he didn’t need to prove what he could already see was true. But the incredible power of Ramanujan’s mathematical intuition that allowed him to correctly solve unsolvable problems came with the price of often producing incorrect results.
If ever there was a case for human genius being nonalgorithmic, it was Ramanujan.
His proof for 1 + 2 + 3 + 4 + 5 + … = -1/12 was not one of his errors, but instead a playful demonstration that divergent infinite sums can seem to converge in surprising ways when properly manipulated. It goes like this.
(1) Let S be the infinite sum:
S = 1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + …
(2) Then multiply everything by -4 to get:
-4S = (-4) + (-8) + (-12) + (-16) + …
(3) Because these two expressions are infinite series that go on forever, they can be arranged in whatever way we wish, and so we align their terms as such:
S = 1 + 2 + 3 + 4 + 5 + 6 + …
-4S = – 4 – 8 – 12 – ….
(4) Adding these two equations together, S + (-4S), term by term we get:
S + (-4S) = -3S
1 + _ = 1
2 + (-4) = -2
3 + _ = 3
4 + (-8) = -4
and so on to produce:
(5) -3S = 1 – 2 + 3 – 4 + 5 – 6 + …
(6) This looks an awful lot like the following infinite series:
1 – 2x + 3x² – 4x³ + 5x⁴ – 6x⁵ + …
(7) Mathematicians know that this infinite series has a definite sum, namely 1/(1+x)².
(8) To make step (6) look like step (5), substitute 1 for x in the formula of step (7), which gives the result:
-3S = 1/4, and thus S = -1/12.
Are you convinced?
To pull off this sleight of hand, Ramanujan had to play fast and loose twice. First, step (3) assumes that infinite sums can be rearranged in any way, just like finite sums. This trick, when cleverly executed, allows divergent infinite sums to be made into just about anything. Second, step (8) blatantly violates the equation in step (7), which is only valid for values of x that are less than 1 in absolute value.
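Both halves of the trick can be checked numerically. A quick sketch (the sample value x = 0.5 is an arbitrary choice) shows the series from step (6) genuinely summing to 1/(1+x)² inside its region of validity, and the partial sums refusing to settle at the forbidden value x = 1:

```python
# Partial sums of the series 1 - 2x + 3x^2 - 4x^3 + ... from step (6).
def partial_sum(x, terms):
    return sum((-1) ** k * (k + 1) * x ** k for k in range(terms))

inside = partial_sum(0.5, 200)   # converges toward 1/(1 + 0.5)**2 = 4/9
s_100 = partial_sum(1, 100)      # at x = 1 the partial sums oscillate:
s_101 = partial_sum(1, 101)      # 1, -1, 2, -2, 3, -3, ...
```

For x = 0.5 the partial sums land squarely on 4/9, but at x = 1 consecutive partial sums are −50 and 51: there is no sum there to equal −3S, which is exactly the hole the proof quietly steps over.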
That small but critical tidbit was conveniently overlooked, almost like Zeno conveniently overlooked the fact that Achilles indeed overtakes the Tortoise. Even the most unassailable logic can lead to absurdity when preceded by a wrong assumption, intentionally or otherwise. And there are plenty of opportunities to make wrong assumptions when dealing with the infinite.
When it comes to the concept of infinity in mathematics, one of those wrong assumptions is that all infinities are the same size. How could it be otherwise? Everything infinite goes on forever, so how can one infinity be “bigger” than another?
Let’s start by convincing ourselves that all infinities are the same. We can agree that there are infinitely many positive integers, namely 1, 2, 3, 4, 5, … and so on forever. No matter how far I count, I can always count one more. Now what about all the even integers, 2, 4, 6, 8, 10, … and so on? Surely there are only half as many of those as there are all integers?
Not so fast.
If we start listing them out, we will soon discover that every integer in the list of all integers can be matched one-to-one with an even integer; no matter how far out on the list we go, we will always be able to make a match. So, if for every integer in the list of integers there exists one and only one even integer (and vice versa, I might add), then the two infinite lists must really be the same size. Isn’t that odd?
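The one-to-one match-up is easy to exhibit on any finite stretch of the two lists; this sketch (the cutoff of 1000 is arbitrary) pairs each positive integer n with the even integer 2n:

```python
# Pair each positive integer n with the even integer 2n. On any finite
# stretch the correspondence is perfect: nothing skipped, nothing repeated.
pairing = {n: 2 * n for n in range(1, 1001)}
matched_evens = set(pairing.values())
```

Every even number up to 2000 is hit exactly once, and the pattern clearly never breaks down no matter how far out you extend the lists, which is what it means for the two infinite sets to be the same size.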
That’s not to say that all infinities are the same size. So far, we have been dealing with what might be called the countably infinite, which are those infinite lists whose members have a one-to-one correspondence with the positive integers. But are there other types of infinity?
Consider all the points on a number line segment from 0 to 1. Do those points have one-to-one correspondence with the positive integers, and how can we know? Imagine making a list of random points along that line segment, such as:
1. 0.6403578100345683705117501750157…
2. 0.1111111111111111111111111111111…
3. 0.9012873465918973651089237001928…
4. 0.0207080405010906000034957350982…
5. 0.7552558553551556554559557552558…
6. 0.2000000000000000000000000000000…
…and so on.
You could keep going on forever coming up with a new point on the list for each subsequent positive integer, and the list would never be complete, even at infinity. How do we know that?
Consider the same list, but highlight each digit in the decimal place corresponding to its order on the list. For example, the first decimal digit is highlighted in the first number on the list, the second decimal digit is highlighted in the second number on the list, and so on, as follows:
1. 0.[6]403578100345683705117501750157…
2. 0.1[1]11111111111111111111111111111…
3. 0.90[1]2873465918973651089237001928…
4. 0.020[7]080405010906000034957350982…
5. 0.7552[5]58553551556554559557552558…
6. 0.20000[0]0000000000000000000000000…
…and so on.
If we constructed a decimal number between 0 and 1 that was different at each decimal digit from the highlighted digits for all of the infinite numbers on the list, we would be guaranteed to produce a new point that was not on the original, infinite list!
Take a moment to convince yourself, if you’d like, and pat yourself on the back if you are successful. This is Cantor’s famous diagonal argument (Gödel used a very similar technique in his incompleteness proofs).
Since this number was deliberately constructed to differ in at least one digit from every number on the list, it must be new. It can then be added to the (infinite) list, and the same process can be followed to find yet another number that didn’t exist on the original (infinite) list.
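The diagonal construction is mechanical enough to sketch directly. Using the six listed numbers (truncated here to ten digits each), change the k-th digit of the k-th number and the result cannot match any entry:

```python
# Cantor's diagonal move: build a decimal that differs from the k-th listed
# number at its k-th digit, so it cannot appear anywhere on the list.
listed = [
    "6403578100",
    "1111111111",
    "9012873465",
    "0207080405",
    "7552558553",
    "2000000000",
]

def diagonal_escape(rows):
    """Build a decimal differing from row k at digit position k (add 1, wrap 9 to 0)."""
    new_digits = [str((int(row[k]) + 1) % 10) for k, row in enumerate(rows)]
    return "0." + "".join(new_digits)

escaped = diagonal_escape(listed)   # differs from every row on the diagonal
```

The escape number disagrees with entry k at decimal position k by construction, and the same recipe works no matter how long (even infinitely long) the list becomes.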
The consequence is that there are necessarily more points on a line than there are positive integers, even though both are infinite. Some infinities are bigger than others!
And not only are there more points on a line than positive integers, the lack of a one-to-one correspondence between the two means that the points on a line are not countable. We thus have the countably infinite and the uncountably infinite.
Georg Cantor, the father of modern set theory, was the first to give names to these different magnitudes of infinity. The smallest infinity, that of the countably infinite, he called ℵ₀ (“aleph-null”), using the first letter of the Hebrew alphabet, Alef (ℵ). The next order of infinity, called ℵ₁, was reserved for points on lines, on planes, and in volumes. Believe it or not, the orders of infinity can keep going, from ℵ₂ to ℵ₃ all the way to ℵ∞!
I am unfamiliar with why Cantor chose the Alef symbol, but I’m hoping it had something to do with how, in Hebrew, Alef all by itself is unpronounceable. It needs a vowel indicator or another letter before it can be pronounced. As such, the ℵ is much like infinity in that it is both something and nothing at the same time. Better yet, the ℵ is much like a quantum particle that exists in an indeterminate superposition of states on its own, but upon interacting with another particle, its value becomes known.
…One More Time
Does time exist, or is it an illusion? That may sound like a strange question, but it is something that physicists are grappling with more and more.
Quantum mechanics and relativity conflict in such a way that time appears to be fundamentally not real. But how can time not be real in a universe under constant change? Doesn’t change require time to exist?
It’s important to understand that when physicists say that time is an illusion, they are not saying that time isn’t experienced or that nothing undergoes change. What they really mean is that time is not foundational to physics; time is not a fundamental building block. In other words, time emerges out of something else, and that something else must exist outside of time.
Perhaps time was created by the same event that created the universe. Does that also imply that space is an illusion in the same way as time, being created along with the creation of the universe by something else that exists outside of it?
One of the great mysteries regarding time is why it flows in only one direction. Even Einstein’s theories of relativity, which demonstrated that time is not absolute but can be sped up and slowed down, left the directional flow of time intact. To us, time seems like being helplessly caught in a river’s current, dragging us along to an as yet unseen destination.
But there is nothing in the vast majority of physical laws that requires a directional flow of time. The laws of physics are time-reversible. One exception is the second law of thermodynamics, which states that a closed system will always tend toward more disorder. The measure of that disorder is called entropy, and its relentless increase gives the flow of time its directionality.
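A toy mixing experiment makes the one-way tendency visible (this is an illustration, not real thermodynamics; the particle count and step count are arbitrary choices). All the particles start on one side of a box, and random re-shuffling drives the two-sided entropy up toward its maximum:

```python
import math
import random

# Toy two-sided box: 100 particles start on the left (perfect order), then get
# repeatedly tossed to random sides. The mixing entropy climbs from 0 toward
# its maximum of 1 bit; it essentially never spontaneously returns to 0.
random.seed(7)

def mixing_entropy(left_count, total):
    p = left_count / total
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

total = 100
sides = [0] * total                                  # 0 = left: all ordered
entropy_start = mixing_entropy(sides.count(0), total)
for _ in range(2000):
    sides[random.randrange(total)] = random.choice((0, 1))
entropy_end = mixing_entropy(sides.count(0), total)
```

The reverse run, all particles randomly wandering back to the left side at once, is not forbidden by any law; it is merely so staggeringly improbable that we never see it, and that asymmetry is the arrow of time.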
If we treat the universe as a closed system,[13] we can envision a point far out in the future when disorder reaches a maximum value. At the point of maximum disorder (maximum entropy), the flow of time will cease as the universe settles into full equilibrium. However, there was already a time when the universe was in a state of maximum entropy, at least in the thermal sense, and that was at the very beginning, at the instant of the Big Bang.
So how did the universe go from maximum disorder to the order and increasing complexity that has evolved since?
After the Big Bang, matter formed from energy and ultimately coalesced and organized into stars, planets, moons, and various debris, with the stars grouped into galaxies, and the galaxies grouped into clusters and superclusters. That organization was aided by gravity – that mysterious force that attracts masses to each other across vast distances through the bending of space itself.
Curiously, as gravitational pull increases with increasing mass, time slows down. As in the movie Interstellar, a group of astronauts who depart on a mission to rescue a fellow astronaut stranded on a planet under the influence of the gravity of a nearby black hole might arrive only minutes after their stranded friend even though the rescue took ten years to plan.
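The slowdown can be estimated with the standard Schwarzschild formula for a clock hovering near a non-rotating black hole (the 100-solar-mass figure and the hovering radius below are made-up example numbers, not Interstellar’s actual setup):

```python
import math

# Gravitational time dilation: a clock hovering at radius r near a mass M runs
# at sqrt(1 - rs/r) the rate of a far-away clock, where rs = 2GM/c^2 is the
# Schwarzschild radius.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8            # speed of light, m/s
M_SUN = 1.989e30       # mass of the sun, kg

def dilation_factor(mass_kg, radius_m):
    """Fraction of far-away time that passes for a clock held at radius_m."""
    rs = 2 * G * mass_kg / C ** 2
    return math.sqrt(1 - rs / radius_m)

mass = 100 * M_SUN
rs = 2 * G * mass / C ** 2
factor = dilation_factor(mass, 1.1 * rs)   # hovering at 1.1 Schwarzschild radii
```

At 1.1 Schwarzschild radii the hovering clock ticks at only about 30% of the far-away rate; creep closer to the horizon and the factor plunges toward zero, which is how minutes for one observer become years for another.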
While matter organized at the macro level in the form of stars and galaxies, matter also organized at the micro level into different types of atoms (elements), that organized into different molecules, that organized into regular patterns in the form of crystals. From what we can tell, all of that happened pretty much everywhere in the universe.
What didn’t happen everywhere was the emergence of life.
Life required a specialized set of conditions to emerge, but there is nothing to indicate that the specialized conditions themselves were responsible for the emergence. Indeed, no scientific explanation for life appearing from no life has been found.
Organisms of ever-increasing complexity began to appear, giving rise to different body types, which we can loosely consider to be species. While many species share common body traits and organ functions, the species each have a unique construction that differs in multiple mutually dependent ways from the body constructions of other species, making it difficult to postulate one as a mutation of another without some series of intermediate organisms linking them.
Given the nearly ten million different species on earth and extensive fossil records, one might expect the scientific data to be replete with examples of these “missing links,” but the contrary is the case. Among that massive diversity, only one was endowed with consciousness – with a brain that could contemplate itself.
All this – the macro, the micro, and life’s diversity – is to observe that the universe does not present itself as a set-it-and-forget-it enterprise. A creative intelligence has been at work all along the way. How else could a universe with reversible physical laws and a flow of time governed by ever-increasing disorder have developed increasing organization and complexity, and with one species of life (on this planet, anyway) capable of contemplating it all and of participating in their own small way in the ongoing creation?
Creation represents its own flow of time, constantly renewing and regenerating what entropy works to decay and corrupt. This is the repeating story of humanity that we find in the Bible. Whenever there is destruction, God creates something new and better. Where there is evil, God brings forth a greater good. Where sin abounds, grace abounds even more.[14]
Creation means that time has a beginning. Entropy means that time has an end. While having a direction and a flow, time is finite on both ends. These two forces are seemingly at odds until we realize that entropy works in service of creation as the death that brings life. This is the paradox of time, and the only synthesis can take place outside of time, in eternity.
Time and space are by-products of the created universe (of matter and energy). They are real to us only because we cannot get outside of them. But without them we could only be and not become, exist and not grow, be dead without resurrecting. As Chesterton said in Orthodoxy, any dead thing can go with the current – it takes something alive to swim against it.
This all brings us back to our true nature as creative creatures. The Creator’s creatures have a purpose, and the Creator grants the gift of free will, desiring that the creature will choose to willingly cooperate in that purpose.
At the same time, the creature experiences conflicting urges – to become and to resist becoming. Like creation competing with entropy, we all yearn to fulfill a purpose while experiencing a default tendency, like an inertia, to fall back into negation. Of these competing forces, one advances order and one advances chaos; one imbues meaning and one resigns to randomness.
As Dorothy Sayers articulated, “His creature simultaneously demands manifestation in space-time and stubbornly opposes it; the will of his universe is to life as implacably as it is to chaos.”[15]
This creation/negation tension captures the paradox of time, and calls to mind the following lines from the Dylan Thomas poem, Do not Go Gentle into that Good Night:
Do not go gentle into that good night,
Old age should burn and rave at close of day;
Rage, rage against the dying of the light.
When we rage against the prospect of death, we are denying the reality of temporal life, specifically that it will end. Raging against death is not affirming life, but denying life, because raging only makes sense if all is random chaos, devoid of meaning.
Only by embracing the reality of time, the brutal decaying force of entropy, and the unavoidability of our impending death are we open to the regenerating powers of creation that lead us to our eternal existence free from the prison of time.
[1] For our purposes, I will use the term “integer” to mean the natural numbers, that is all positive integers greater than 0, namely 1, 2, 3, 4, 5, ….
[2] This was the Taniyama-Shimura-Weil conjecture, posited in 1955, and having nothing to do with Fermat’s Last Theorem.
[3] See, for example, https://blogs.egu.eu/divisions/cl/2017/08/01/of-butterflies-and-climate/
[4] Lorenz, E. N., “Deterministic Nonperiodic Flow,” Journal of Atmospheric Science, 20(2), 130-141 (1963).
[5] Douglas Hofstadter, I Am a Strange Loop.
[6] Sayers, The Mind of the Maker, p. 213.
[7] “[A]ll things work together for good for those who love God, who are called according to his purpose.” Romans 8:28, NRSV-CI.
[8] This bears some similarity to how the term “lightyear” is misused as a measure of time when it is actually a measure of distance.
[9] Thirty quadrillion is roughly the number of atoms in a single grain of sand. Another way to say thirty quadrillion is 30 million billion.
[10] Attributed to St. Teresa of Avila, a Carmelite nun in Spain who lived from 1515 to 1582.
[11] Carl B. Boyer, The History of the Calculus and Its Conceptual Development, p. 91.
[12] “It is always simple to fall; there are an infinity of angles at which one falls, only one at which one stands.” G.K. Chesterton, Orthodoxy
[13] Opinions vary about whether the universe is a closed or open system, whether it is finite or infinite, whether it is curved or flat, and whether it will expand forever or will contract. For our purposes, it may not matter. However, to the extent it does, we will assume the universe is a closed, finite system that has some curvature, and leave open the question of whether it will keep expanding or whether it will contract.
[14] Romans 5:20
[15] Sayers, The Mind of the Maker, p. 141