Chapter 3

Some mathematical problems

Perhaps because of this, mathematicians are interested in what they call "closure" and "completeness." As I mentioned in the preceding section, a system is closed when any legitimate operation on an object in the system will keep you still inside the system; and it is complete when any statement in the system follows somehow from the axioms. (In case you are wondering what a "mathematical statement" is, it is the affirmation of some relation among the objects. For example, 2 + 2 = 4 is a mathematical statement. You can see that it can be called true, because it is consistent with the axioms of number theory.)

But since mathematics makes statements and uses logic to draw its conclusions, it would not be surprising to find that it is possible to construct indirectly self-contradictory statements in a particular branch of mathematics. For instance, in set theory, you can talk about "the set of all sets that are not proper subsets of themselves." (A proper subset is, basically, a subset that doesn't contain all the members of the set it is in; if it has all of them, it is improper.) If the set above is not a proper subset of itself, then of course it is one of its members; but since there are the other member sets, this would make it a proper subset of itself, which would exclude it as a member. Something that contradicts itself even by implication obviously has to be ruled out as an object. Ruling these out would not make the system incomplete, any more than it would to rule out an "object" that violates the very definition of the objects in the system, such as trying to argue mathematically from the Trinity, which is one and also three. The Trinity cannot be a mathematical object in number theory, because the "one" of number theory excludes "three."

But the search for completeness leads to one of the interesting conundrums of mathematics. Some years ago, a man named Kurt Gödel showed that, in any mathematical system that was complex enough (which included, of course, all the major areas of mathematics), a mathematical statement existed which was the equivalent of "This statement does not follow from the axioms." And, of course, any system with that statement in it is by definition not complete.

This is not one of those indirectly self-contradictory statements, because there is no necessary connection between a statement's being meaningful and its following from the axioms. That is, there is no intrinsic necessity for saying that every statement that is consistent with the axioms has to be implied by them.

I think the reason is that implication is a logical relationship; and the statements are related to the axioms by logic, not by the relationship that forms the basis of the axioms themselves. And there's no law of logic I know of that says that logic as actually used has to be a closed system.(1)

And when you think about it, to say that something depends on something else (as an effect depends on a cause, or--in this case--a conclusion depends on its premises) doesn't imply that relations between dependents also have to depend on something. So the fact that theorems in mathematics are meaningful statements that depend on the axioms doesn't imply that all meaningful statements that can be made are theorems.

Nevertheless, mathematics wants to make its system as complete as possible, so that the axioms will indeed imply "practically all" statements made in the system. And, of course, it wants its system closed, so that whatever is done in accordance with the axioms will remain a meaningful statement.

In discussing this in the preceding section, I mentioned how the various kinds of numbers were created to preserve closure. Any kind of mathematics with pretensions to applicability also wants to preserve the two kinds of statement equivalent to affirmation and denial within its system; and within the system the equivalent of a denial is called an "inverse" of the statement or operation in question. Thus, in the number system, subtraction is the inverse of addition (and vice versa, of course), division the inverse of multiplication, taking the root the inverse of raising to a power, differentiation the inverse of integration, and so on.

And what was discovered is that it's one thing to have a closed system on one operation; but to have it closed on the operation and its inverse is something else again. So the integers were invented to close subtraction; but this created the number zero, which was neither positive nor negative, but was needed to take care of performing the inverse operation on the same number (as 3 - 3 or -3 + 3). In order to include this number in multiplication, the rule was made that any number multiplied by zero gave a result of zero (because multiplication by 1 gave you the number itself). Everything was fine with respect to multiplication now, but the inverse meant that zero divided by any number would have to be zero (because its inverse would be zero times that number--take the result of the division, zero, and multiply it by the number, and you have to get the original number, zero). But then, what about dividing by zero? In all cases but one, it can't have an inverse. That is, take 7 divided by zero. What would it be? Not 7, because the inverse (7 times zero) is zero, not the 7 you started with. Not zero, because zero times zero is zero, not 7. And certainly not any other number.

So mathematics had to throw up its hands and say, "Division by zero is forbidden." The attempt to close the system and keep the inverses had resulted in an operation that gave meaningless results.

Nevertheless, there is one division by zero that is not meaningless, because its inverse gives a result: Zero divided by zero. The trouble with this is that you can assign any number you want as its quotient (result, for those of you who have forgotten your division), and the inverse will work. For instance, 0/0 = 322. Well, 322 x 0 = 0. So it works. Hence, the operation in this case is meaningful but indeterminate.
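If you like to see this sort of thing spelled out, here is a minimal sketch in Python (purely my illustration; the 7 and the 322 are just the numbers used above). It treats "q is the quotient of a divided by b" as meaning nothing more than "q times b gives you back a," and then checks candidates by brute force:

```python
# Division taken strictly as the inverse of multiplication:
# q counts as a quotient of a/b only if q * b == a.

def acceptable_quotients(a, b, candidates):
    """Return every candidate q that passes the inverse check q * b == a."""
    return [q for q in candidates if q * b == a]

candidates = range(-1000, 1001)

# 7 / 0: nothing passes the inverse check, so the operation is meaningless.
print(acceptable_quotients(7, 0, candidates))        # []

# 0 / 0: everything passes (including 322), so the operation is
# meaningful but indeterminate.
print(len(acceptable_quotients(0, 0, candidates)))   # 2001
```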

Why am I bothering with this? Because it turns out that there is an application for it which was discovered by Isaac Newton and Gottfried Leibniz more or less at the same time but independently of each other--and each of them developed the system by abstraction from its applications (Newton from investigating motion, Leibniz to show that his theory of monads worked), and mathematicians ever since have been racking their brains to show why it is mathematically legitimate.

I am talking about the differential and integral calculus, of course. Let me give you the standard justification for it, which mathematics has more or less settled on, and which is riddled with inconsistencies: the notion of the "limit."

The idea of the limit is that if a given result of a mathematical operation gets closer to a certain number (or stays at that number) as the objects operated on get smaller and smaller,(2) then it makes sense to say that you know what the result would be if they actually got to zero. Their actually being at zero is ruled out for one reason or another by the laws of mathematics (such as its being illegitimate to divide by zero); but if it did make sense, we know what the answer would be. That answer is called the "limit."

Now mathematicians talk about getting "really close" to the limit by being in the "epsilon neighborhood" of it. By this they mean "Take a really tiny number--and I mean really really tiny, and call it 'epsilon,' and I'll show you a 'delta' which is even smaller." It's supposed to be a number so small that its distance from the impossible value makes no practical difference; and if the result is all right at this range, then "that's good enough for practical purposes."
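To make the "epsilon" talk concrete, here is a rough numerical sketch (in Python, my illustration only, and a sampling check rather than a proof). It uses the classic zero-divided-by-zero case sin(x)/x, which is undefined at x = 0 but has 1 as its limit there: name any epsilon you like, and a delta turns up such that staying within delta of zero keeps the result within epsilon of 1.

```python
import math

def f(x):
    # A classic 0/0 case: sin(x)/x is undefined at x = 0, but its limit there is 1.
    return math.sin(x) / x

def delta_for(epsilon, limit=1.0):
    """Search for a delta such that sampled points with 0 < x < delta
    all keep f(x) within epsilon of the limit (a numerical check, not a proof)."""
    delta = 1.0
    while delta > 1e-12:
        samples = [delta * k / 1000 for k in range(1, 1001)]
        if all(abs(f(x) - limit) < epsilon for x in samples):
            return delta
        delta /= 2
    return None

for eps in (0.1, 0.001, 0.00001):
    print(eps, delta_for(eps))
```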

Mathematically, of course, that's nonsense. No matter how close your point is to your target point on a line, you still have just as many points as are in a line a hundred miles long between you and the target. I could prove this, but ask your neighborhood mathematician to do it for you. Any line has an infinity of points in it, just exactly as many as the points in any other line.

And the limit is an exact number, not a very close approximation. Let me refer back to a case where the limit is approached as the number becomes larger: the supposed mathematical "solution" to Zeno's paradox about crossing the room that I talked about in Chapter 5 of Section 3 of the second part 2.3.5. There, you will recall, the argument was that to cross the room, you first had to go half way, then half of the rest, then half of the rest, and so on; and you can never get there, because you still have half of the remaining distance to go no matter what point you've reached.

I solved that paradox there by saying that the motion across the room was one act, not a series of starts and stops; but what I am interested in here is why the concept of the limit doesn't solve it, even though some mathematicians who don't understand what the limit means think it does.

Now the distance to the other side of the room is the whole distance (corresponding to the number 1), and this is broken up into the series (1/2 + 1/4 + 1/8 + 1/16 + ... + 1/2^n + ...). If you look at the sums at each stage, you see that they are 3/4, 7/8, 15/16, ... (2^n - 1)/2^n, ...; so that the larger n becomes, the closer the fraction is to 1. The limit, therefore, of this series "as n becomes infinite" is 1.
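You can watch the shortfall shrink without ever vanishing in a quick sketch (Python again, just as an illustration; exact fractions make the point better than decimals would):

```python
# Partial sums of 1/2 + 1/4 + 1/8 + ...: each one is (2^n - 1)/2^n,
# as close to 1 as you like, but always short of it.
from fractions import Fraction

total = Fraction(0)
for n in range(1, 11):
    total += Fraction(1, 2**n)
    print(n, total, "short of 1 by", 1 - total)
```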

"Therefore," say the mathematicians, "you can get there." No you can't, I answer. The limit is the definite place you can't get to and can't get beyond; though you can get as close to it as you like.

That is, you could get to the limit if this number meant anything: ∞. But ∞ (called "infinity") is just "the last number," and the number system is defined in such a way that there is no last number. It is not speaking properly to say that the numbers in the series "approach infinity," as if it were a number to be approached, but that they "become infinite," meaning that they just keep getting larger and larger without stopping. So that "number" is just a sign of a process, not a number at all. I'm speaking within mathematics here, not commenting on it; any mathematician would agree with what I am saying. Zero is a number but "infinity" isn't.

But since you get closer and closer to 1 as the numbers in the fraction "become infinite," then 1 is the place you would get if it were ever possible to get there (which it isn't). So you still can't get across the room. The only thing the limit says is that the other side (and not, say, the ceiling) is the place you can't get to. So Zeno's paradox is only defined, not solved, by the notion of the limit.

Similarly, if you are traveling 32 miles in one hour, you're traveling 32 miles an hour; if you keep going a half hour longer and you go 16 miles farther, you're still going 32 miles an hour; if you go for a quarter hour and do 8 miles, you're still at the same speed--and so on. If, as the time of your travel gets shorter and shorter, the ratio between the distance and the time (the speed) remains the same, even when the time gets down into nanoseconds, then we can safely assume that you're keeping a steady pace. So what speed are you traveling at a given instant?
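Before answering, look at the arithmetic of those shrinking intervals (a trivial sketch in Python, assuming the steady 32 miles an hour of the example): the distance and the time both dwindle toward zero, but their ratio never budges.

```python
# Steady travel at 32 miles per hour: halve the interval as often as you like,
# and the ratio of distance to time stays exactly 32.
speed = 32.0
hours = 1.0
for _ in range(10):
    miles = speed * hours               # distance covered in this interval
    print(hours, miles, miles / hours)  # the last column is always 32.0
    hours /= 2
```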

Well, if you consider a speed a distance divided by a time (it isn't actually, as I said in Chapter 6 of Section 3 of the second part 2.3.6, but we're doing mathematics here, not physics), then you've got a distance which is "infinitesimally small" divided by zero. No you haven't. If your distance is anything but zero, then the time is not zero (an instant) but one thirty-second of the distance (which would be a finite number). Hence, your "infinitesimally small" distance has to be a zero: in this case, the zero which is thirty-two times as great as the zero in the denominator.

What are you saying? Zero x 32 = zero, of course. But that means that the zero on the right-hand side is a zero which is thirty-two times as great as the zero on the left-hand side.

But that's nonsense, isn't it? No. Divide the zero on the right by 32 (that's legit; it's the other way that's forbidden); you get zero (the particular zero that is one thirty-second of the numerator).

Remember that I said that zero divided by zero is meaningful but indeterminate? Well in special cases like this, where 0/0 is the limit of some "continuous function" (something that boils down to a series more or less like the one I described), then the zeros are defined in relation to each other, and the result is a definite number based on the ratio. Obviously, if you're traveling at a steady 32 miles an hour, you're traveling that speed at any instant of your journey--as you can check by looking at your speedometer, which measures instantaneous velocity, as I said in Chapter 5 of Section 3 of the second part 2.3.5, not some ratio of distance to time.
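To see a case where the two zeros really are defined in relation to each other, take a journey whose distance is not simply proportional to the time; say the distance covered after t hours is 16 times t squared (a made-up function, purely for illustration). The ratio of the vanishing distance to the vanishing time then settles on one definite number, which is the instantaneous speed:

```python
# Non-uniform motion: distance d(t) = 16 * t**2 (an invented example).
# The instantaneous speed at t = 1 is the definite number that the ratio
# (d(t + h) - d(t)) / h settles on as distance and time both shrink toward zero.

def d(t):
    return 16 * t**2

t = 1.0
h = 1.0
for _ in range(8):
    print(h, (d(t + h) - d(t)) / h)   # heads for 32 as h shrinks
    h /= 10
```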

That's why the calculus works, not because of some "epsilon neighborhood" you get into. 0/0 is an exact number in these cases, not a "very close approximation to something that is meaningless," because in these cases--and only in these--it has meaning.

So given that zero divided by zero is defined in the cases spoken of in the calculus, that means that there is a whole field of numbers you get into in this process, and that you can get out of by integration. That is, these numbers would be something like the negative numbers you get into by subtracting a larger from a smaller number, or the square roots you get into by taking the root of something that's not a perfect square, or the imaginary numbers you get into by taking the square root of a negative number.

Since I have discovered this field of numbers as a field, even though it's been in use already, and since I have shown how you get into it and out of it, I now claim the right to name it:

The philosophical numbers are the numbers entered into by dividing zero by zero when that is defined or in general by following the rules of the differential calculus.



The beautiful numbers are the number system that includes the real numbers, the imaginary numbers, and the philosophical numbers. That is, all the numbers known up to the present.

I will leave it to the mathematicians to work on the number system in the light of this approach. I do think it should make the calculus less of an anomaly than it is at present.

So that's one paradox in mathematics that I think I have been able to do something to solve.

I think, however, that there is another paradox that is due to an implicit taking of a word in two senses, leading to strange results. I am speaking of the theory of infinite sets (and by implication all that follows from it).

An infinite set is one that is cardinally equivalent--in ordinary language "equal," though it's technically defined, of course (see below)--to a proper subset of itself. We saw "proper" and "improper" subsets above, and to refresh your memory, {1, 2, 3} is a proper subset of {1, 2, 3, 4, 5}, while {5, 4, 3, 2, 1}, for instance, would be an improper subset of it (the arrangement of the members doesn't matter). A set is "cardinally equivalent" to another if you can match up each member of one with one and only one member of the other. Thus, {a, b, c, d, e} is cardinally equivalent to {1, 2, 3, 4, 5}. This is what is meant by "equal" or "having the same number of members as" in set theory.

Now then, if you take the set of the natural numbers, {1, 2, 3, ... n, ...}, you can match this up with the even numbers, {2, 4, 6, ... 2n ...} by the rule implied in the "2n." Since every number has a double, then for any member of the natural numbers, there is one and only one even number that corresponds. Hence, the set of the natural numbers is equal to (cardinally equivalent to) the even numbers. But of course, the set of the natural numbers contains both the even numbers and the odd numbers; and so there are members in it that are not in the even numbers--even though "cardinal equivalence" obviously means that there are the same number of members in both.
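The matching rule is simple enough to spell out; here is a little sketch (my own, in Python) checking it on a finite stretch: the rule "n goes to 2n" pairs each natural number with exactly one even number, never repeating and never skipping.

```python
# The rule n -> 2n pairs each natural number with one and only one even number.
pairs = [(n, 2 * n) for n in range(1, 11)]
print(pairs)   # (1, 2), (2, 4), (3, 6), ...

# On this stretch, every even number up to 20 is hit exactly once:
evens_hit = sorted(e for _, e in pairs)
print(evens_hit == list(range(2, 21, 2)))   # True
```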

Instead of saying, "Wait a minute! We have to have contradicted ourselves somehow!" mathematicians have said, "Well, this is just one of the odd things about infinite sets: that they have the same number of members as part of themselves."(3)

All sorts of bizarre conclusions can be drawn, once you accept that everything is all right with this theory. For instance, the double of the set of the natural numbers is equal to the set; the square of the set of natural numbers is equal to the set. Adding one to the set makes the set equal to what it was (because you can match the additional 1 to the 1 of the original, and every number from there on to n + 1 in the original).(4)

All very fascinating; but I think that there's a hidden contradiction in the core of set theory; and I don't think that you can really talk about "the set of the natural numbers" as a set. Why? Because you are talking about the set of all the natural numbers, and the natural numbers are so defined that "all" in the sense you'd have to be talking about it has no meaning.

There are, as I said in the preceding section, two senses of "all." The first is the collective sense, in which you would say, "All the members of the class weighed exactly one ton." Here, you're taking "all" in the sense of "all, taken together as a unit." The second is the distributive sense, in which you could say, "All the members of the class are human beings," which is the equivalent of "Every member of the class is a human being." Here you are talking about the members individually, but none of them lacks the property you are attributing to them.

Connected with "every" is "any," which means, "pick out a member at random, and it will have the property I am speaking of." This is obviously an implication of "every"; if every member of the class is a human being, then any member of the class is a human being.

Now then, in talking about the set of the natural numbers, for instance, it has to be defined accurately. And it is defined accurately by {1, 2, 3, ... n ...}. The dots say, "proceed in this fashion" (in this case, by adding 1 to the preceding number); and the "n" says "do this for any number" and the dots after it say, "keep going." So now, you can tell whether any object in the universe (even the mathematical universe of numbers) belongs to the set or not. For instance, 2/3 does not belong to the set, because it can't be got by adding 1 to a whole number. On the other hand, 753,826,714 obviously belongs to it.
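That "rule by which you can tell whether any object belongs" can be written down quite literally. A rough sketch (mine, not part of any mathematical canon), taking the candidate objects to be exact fractions and whole numbers:

```python
# Membership rule for "the natural numbers": something belongs just in case
# it can be reached from 1 by repeatedly adding 1 -- i.e. it is a whole number >= 1.
from fractions import Fraction

def is_natural(x):
    return x == int(x) and x >= 1

print(is_natural(Fraction(2, 3)))   # False: 2/3 can't be got by adding 1 to a whole number
print(is_natural(753_826_714))      # True
```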

Now in defining the set this way, have you defined all the members, or even every member? You have if "all" means "I have a rule by which I can tell whether any object I meet belongs to the set or not; and I have another rule which tells me how to get any member of the set I want, and another rule which tells me to keep finding members."

But I submit that "all" means more than this, and you can see what I am driving at by considering the statement, "All the members of the class weighed exactly one ton." The point is that the numbers are so defined that "all" in this collective sense of "all taken together as a unit" has no meaning. They can't be taken together, because every number has a number (in fact an infinity of numbers) beyond it, because it is a property of any number that 1 can be added to it. Hence, you could never get through the numbers, and so the "keep going" rule can never be fulfilled.

Now I don't mean "never" in the sense of "not in any finite time" here, meaning merely that if you kept going until the heat-death of the universe, you wouldn't have finished. What I mean is that each time you add 1 to a number, you are just exactly as far away from completion as you were before you did it. Thus, "finishing" is not merely something that cannot ever in practice be accomplished, or even approached (really, you're always just as far away from "it" as you ever were); it is something that is self-contradictory.

This is similar to the notion of the limit, which I spoke of earlier, and which might make what I am trying to say clearer. In the case of the series which approaches 1 as a limit (1/2 + 1/4 + ... + 1/2^n + ...), I mentioned that this corresponds to the set of sums {1/2, 3/4, 7/8, ... (2^n - 1)/2^n ...}. Now if you say that "if you add up all the members of the series, you'll get 1," what you are now saying is that the last member of the set of sums is 1. But clearly this is impossible, because there is no "n" such that "(2^n - 1)/2^n = 1" is true. Hence, the limit precisely cannot be attained, because it is meaningless to talk about all the members of the series.

But that same sense of "all" is what you mean by talking about "all" the members of a set. You have a rule which defines "any" member and another one which tells you to keep going; but as above, that rule does not define "all" in the collective sense. In order to have that you need an additional statement or rule that will tell you "and there are no more." Not no "others," because that means "of a different type," and would be excluded by defining "any"; but no additional ones of this type. In other words, in order to define a set, in which the members are to be taken collectively together as a unit, you have to have a rule telling you when to stop including members in it.

So I think that there is something in the relation of "belonging to" that makes infinite sets out of the question; and I think my little demonstration about the sum of an infinite series corresponding to an impossible member of the set of sums shows that the difficulty is real, and that it is connected with the notion of "all" as used in set theory(5).

Where does that leave us? It seems to me that if what I said is true, you can't really talk about "the set of the natural numbers," any more than you can talk about ∞ as a number; though you can talk about "the natural numbers" in a kind of rough-and-ready loose sense, just as you can talk about "infinity" in a loose sense and use that symbol to refer to "it," realizing that in both cases you are talking about a continuous operation rather than the result of one. That is, since we know that "the natural numbers" are 1 and any number that follows by addition of 1, we can talk about "1 and 'all' the integers greater than it" as long as it is recognized that in the strict sense this is meaningless.

This concludes all that I have to say about mathematics. The subject is obviously very complex, but I leave it to the mathematicians. All I was interested in showing here is what kind of thinking and reasoning process goes on in mathematics--and based on that, how some of the apparent contradictions in the system can be solved.


Notes

1. In point of fact, that was what contemporary logic was trying to create; but, as I tried to show in the preceding chapter, contemporary symbolic logic cannot be applied to actual statements without claiming that some manifestly false statements are true.

2. Or larger and larger, but we are interested in increasing smallness of the objects here.

3. This notion of "infinite" in contemporary mathematics, I hasten to say, has no connection with the sense of "infinite" that I have been using in talking about God. Since quantity is a limit, then quantitative terms such as numbers simply do not apply to God; and if trying to apply them (as, for example, talking about the Trinity) gets you into self-contradictions that sound like the paradoxes of non-finite mathematics, that is coincidental.

4. Interestingly, in case you're curious, the set of the real numbers (the integers, the fractions, and the square roots) is not equal to the set of the natural numbers, because there's no way you can match up the square roots with natural numbers; there will always be some left over. There is a proof for this, which I forget at the moment, which shows that the attempt to do the matching involves a contradiction. So there are in fact smaller and bigger "infinities" in infinite set theory.
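The proof referred to here is presumably Cantor's "diagonal" argument; a rough sketch of the idea in miniature (my reconstruction, not part of the text): given any attempted matching of natural numbers with decimal expansions, build a new expansion that differs from the n-th one at its n-th digit, so the matching must have missed it.

```python
# Cantor's diagonal idea in miniature: given any attempted list of digit
# sequences, build one that differs from the n-th entry at its n-th digit,
# so it cannot appear anywhere in the list.

def diagonal_escape(rows):
    """rows[n] is the digit sequence of the n-th listed number."""
    return [(row[n] + 1) % 10 for n, row in enumerate(rows)]

attempt = [
    [1, 4, 1, 5, 9],
    [7, 1, 8, 2, 8],
    [3, 3, 3, 3, 3],
    [0, 0, 0, 0, 0],
    [9, 9, 9, 9, 9],
]
escaped = diagonal_escape(attempt)
print(escaped)                                 # differs from row n at position n
print(any(escaped == row for row in attempt))  # False
```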

5. Note that "all" as used here is a philosophical term, not a mathematical one. That is, you can define "all" in a given mathematical scheme; but then you have to use it consistently with that definition. If you define "all" to mean what is meant in ordinary language by "any," you can't use it in the collective sense of "all taken together as a unit." And I submit that this is what mathematicians are in practice doing, whatever they say they are doing. So no, Humpty Dumpty, when you define a word, it may mean just what you want it to mean, but then you can't use it as if it meant something different.
