Some Points on the use of Infinity in Mathematics

I recently listened to an interesting interview with Norman Wildberger, which inspired me to note down some additional points on the use of infinity in mathematics. Many of the points Prof. Wildberger advocates in the interview, as well as the points below, matter for a solid foundation for my own work on the foundations of computer science. Both mathematics and computer science ought to be clear, reality-oriented sciences that we can confidently use to prosper.

1. Fallacy of Reification

It is a common error in my field (computer science) and in mathematics to commit the fallacy of reification: treating something that is not a thing as a thing. For example, potential infinity, or a limit approaching infinity, is actually a process (cf. Aristotle). If we treat this process as if it had reached an end, we commit the fallacy of reification.

The digits of pi can be computed using various formulas. Replacing a formula with the single sign π has the effect of pushing people toward the fallacy of reification: the symbol looks like a thing, but it actually stands for a process, a formula. Contrast this with the circumference of an ellipse, which has no comparably simple symbol; there it is easier to see that the computation is a process.
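As a minimal sketch of this point (my own illustration, not from the interview): the sign π can always be replaced by a finite computation. Here, partial sums of Machin's arctangent formula, with the number of terms as an explicit parameter:

```python
from fractions import Fraction

def arctan_series(x: Fraction, terms: int) -> Fraction:
    """Partial sum of arctan(x) = x - x^3/3 + x^5/5 - ..."""
    total, power = Fraction(0), x
    for k in range(terms):
        total += (-1) ** k * power / (2 * k + 1)
        power *= x * x
    return total

def pi_machin(terms: int) -> Fraction:
    """Machin's 1706 formula: pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    return (16 * arctan_series(Fraction(1, 5), terms)
            - 4 * arctan_series(Fraction(1, 239), terms))

print(float(pi_machin(15)))  # 3.141592653589793 -- pi to double precision
```

Every call is a finite computation on exact rationals; asking for more terms yields more correct digits, and nothing "infinite" ever occurs.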

Interestingly, this fallacy applies to the concept "nothing" as well, and "nothing" and zero are closely related. If I say "Nothing can hurt me," I don't mean there is an object called "Nothing" that can hurt me; that would be ridiculous. That would be reifying the absence of something as a thing.

I think many issues of zero in maths relate to this as well.

In medieval times, there was a debate around the statement "Nothing is more powerful than God": if "Nothing" is taken as an object, that object would be more powerful than God, which some argued was blasphemous!

2. Parameterizing precision

One role of potential infinity in formulas is to parameterize precision. In other words, given a formula for pi, we can produce just the number of digits our practical application needs. The role of the infinity symbol is actually to indicate that we can compute as many digits as we want, but the usage of the symbol ∞ pushes people into thinking it is a thing.

A thought experiment: if the notation had been different, this might not have been as tempting. Say we parameterized one of the formulas for computing pi with N instead. Then it would be a convention that N could be replaced by any positive integer, and we would get that many digits. Perhaps better still: parameterize the formula with an error bound, and get back the number of iterations required to reach a value within that bound.
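The error-bound variant of this thought experiment can be sketched too. Here is a hypothetical `pi_within` (my own naming), using the slow but simple Leibniz series, where the standard alternating-series bound guarantees the precision:

```python
def pi_within(eps: float) -> tuple[float, int]:
    """Approximate pi to within eps via the Leibniz series
    4*(1 - 1/3 + 1/5 - ...). For an alternating series with decreasing
    terms, the truncation error is below the first omitted term, so we
    stop once 4/(2k+1) drops under eps. Returns (value, iterations)."""
    total, k = 0.0, 0
    while 4.0 / (2 * k + 1) >= eps:
        total += (-1) ** k * 4.0 / (2 * k + 1)
        k += 1
    return total, k

value, iterations = pi_within(1e-4)
print(iterations)  # 20000 -- the bound itself tells us how many steps were needed
```

The infinity symbol never appears: the caller states the precision required, and the process reports how much work that precision costs.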

3. Knowledge is power

Many think the purpose of knowledge is to enjoy the contemplation of it (a view advocated by Aristotle). The actual purpose of knowledge is rather to gain control over the world so that we can prosper (Francis Bacon). This puts a limit on mathematics: it should serve some practical purpose.

Today we have good practical applications from roughly the scale of the atom to that of the solar system. This range might expand in the future, but billions of people can live happy, prosperous lives for millennia knowing only enough mathematics for that range. And in that range, for all practical purposes, pi = 3.141592653589793238. (The value bundled in Intel processors, by the way.)
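A back-of-the-envelope check of this claim (my own figures: Neptune's orbital radius, roughly 4.5 billion km, stands in for the solar system):

```python
# Truncating pi after 18 decimal places introduces an error below 1e-18.
pi_error = 1e-18
r_solar_system = 4.5e12  # Neptune's orbital radius in metres (rough figure)

# Error in a circumference at that radius: C = 2*pi*r, so dC = 2*r*d(pi).
circumference_error = 2 * r_solar_system * pi_error
print(circumference_error)  # about 9e-06 m -- under a hundredth of a millimetre
```

So an 18-digit pi locates a circle the size of the solar system to well under a millimetre, which is far beyond any engineering need.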

A bold approach might be to simply set bounds on some parts of mathematics and then say that, having checked this range, a given theorem is true for all practical purposes. This would have to be done carefully, though.
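To make "check the range" concrete, here is a toy version (my own example, using Goldbach's conjecture, which is unproven in general but easily verified by exhaustion on a bounded range):

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test; fine for small bounded ranges."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def goldbach_holds_up_to(limit: int) -> bool:
    """Verify by exhaustion that every even n in [4, limit] is a sum of
    two primes. Within the checked bound the statement is simply true;
    no appeal to infinity is needed."""
    return all(
        any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))
        for n in range(4, limit + 1, 2)
    )

print(goldbach_holds_up_to(1000))  # True
```

Within the checked bound, the statement is established fact; whether the unbounded version "exists" as a question is exactly what is at issue above.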

Also, one can criticize someone who thinks about issues very far removed from any possibility of practical application for wasting their time on nonsense.

4. Practical results come from studying the real world

If we look at the history of science, then there is often undue credit given to “pure” researchers, while those who properly studied reality get too little credit.

An example from my own field is Babbage and Turing. Turing is given a lot of credit for what was actually done by Charles Babbage. When we look at Babbage, we find a scientist with a huge interest in productive activities and the real world, while Turing was more interested in far-fetched ideas, such as actually infinite computers (his 1937 paper) or computers with magical abilities (the "oracle machines" of his PhD thesis).

Such things boost the image of “pure” science, while lessening the image of more reality-oriented scientists who did the actual work.

If credit is given where credit is due, I think we will see that studying reality and real application is what drives science to produce productive results.

Also, the pure-applied distinction is invalid. There is only science, and science is based on reality. If one fantasizes about things outside reality, that is properly classified as fantasy, which is not an unusual thing for people to engage in!


If we analyze a number of mathematical theorems and conclusions in this framework, I find them to be less mystical. For example, Cantor's famous proof simply commits the fallacy of reification of infinity, so the continuum hypothesis loses its basis, and so on.
