Part Four

Modes of Thought


Part Four deals with the various types of human thought: mystical experiences and "altered states of consciousness," logic, mathematics, science, esthetics, humor, and values.

Section 1: Mysticism
   Chapter 1: The different kinds of thought
   Chapter 2: Empty consciousness
   Chapter 3: Altered states of consciousness
   Chapter 4: Absolute consciousness
Section 2: Formal logic
   Chapter 1: The different kinds of logic
   Chapter 2: Logic and truth
   Chapter 3: Propositions and their parts
   Chapter 4: Operations using a single proposition
   Chapter 5: Compounding propositions
   Chapter 6: Compounds using subjects and predicates
   Chapter 7: The categorical syllogism
Section 3: Mathematics
   Chapter 1: The different kinds of mathematics
   Chapter 2: The foundation of mathematics
   Chapter 3: Some mathematical problems
Section 4: Science
   Chapter 1: Logic and the real world
   Chapter 2: Observation and hypothesis
   Chapter 3: Experiment
   Chapter 4: Theory and verification
Section 5: Beauty and art
   Chapter 1: Esthetic understanding
   Chapter 2: Emotions and objectivity
   Chapter 3: Esthetic facts and beauty
   Chapter 4: Beauty and art
Section 6: Humor
   Chapter 1: Is humor just nastiness with a smile?
   Chapter 2: What is humor?
   Chapter 3: Types of humor; satire
Section 7: Values
   Chapter 1: Values vs. morals
   Chapter 2: Goals and values
   Chapter 3: Essential acts and necessities
   Chapter 4: Kinds of values


Section 1

Mysticism


Chapter 1

The different kinds of thought

Well, then, where are we? The promise early in the first part that we would discuss what it meant to be a being who had conscious factual knowledge has been in part redeemed; in the fourth chapter of the fourth section of the third part (3.4.4) we saw what this implied for the life of the human being.

But we have still not discussed the different types of factual knowledge we have, and how this information is arranged so that we can derive new relations from those we have already seen; and that is what this part is about.

This part really has two aspects: the different avenues through which we get information (distinguishing mysticism, perceptual experience, and esthetic knowledge) and various ways in which we arrange this information so as to understand new facts based on it.

This latter, of course, is called reasoning, and a set of rules for arranging data so that new facts emerge is called a logic. What is ordinarily called "logic" or "formal logic" is actually the logic of statements, or linguistic expressions of perceptually based facts; but there are many other logics--in fact, there is really a unique logic for every discipline that deals in any kind of factual information. Insofar as this information is translatable into perceptually based factual statements, the laws of formal logic apply; but each discipline has its own special rules for arranging data, and it does not follow that the logic of the statements that describe what it is doing is the same as the logic of how it is actually handling the information.
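To make concrete what I mean by a rule that lets new facts emerge from data already arranged, here is a minimal sketch of the classic "Barbara" syllogism (anticipating the chapter on the categorical syllogism below). I give it in the notation of the Lean proof assistant; the choice of Lean, and all the names in the sketch, are mine, purely for illustration:

   -- The "Barbara" syllogism, checked in Lean 4 (an illustrative sketch only):
   -- from "All M are P" and "All S are M" the logic licenses "All S are P".
   example {Thing : Type} (S M P : Thing → Prop)
       (h1 : ∀ x, M x → P x)        -- All M are P
       (h2 : ∀ x, S x → M x)        -- All S are M
       : ∀ x, S x → P x :=          -- therefore, All S are P
     fun x hS => h1 x (h2 x hS)

The conclusion is "new" in the sense that neither premise contains it by itself; it is the arrangement of the two according to the rule that makes the new fact emerge--and that arrangement is all I am calling a logic here.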

That is, I don't think that mathematics is a kind of subset of formal logic, any more than the logic of physics is a subset of mathematics, even though physics uses mathematics and mathematics uses statements. I think that people most often misunderstand a discipline, in fact, when they try to translate its logic into a different kind of logic that they are familiar with--which is one reason, I suspect, why so many have difficulty with mathematics. Actually, if a person can grasp the logic of the particular discipline he is studying, he can know more of it faster than the person who knows many, many more of the details. But I may be saying this, of course, because I know so few of the details about anything.

In any case, in subsequent sections, we will see a little bit of certain classes of logics.



Chapter 2

Empty consciousness

But first, let us talk about the type of consciousness that involves no reasoning, and in fact no concepts: what is called "mysticism," of which there are several types. I think I would like to include here things like hypnotism and the possibility of possession by spirits, and mention other "altered states of consciousness", because many of these forms of consciousness seem to be a special kind of knowing, but I think that in most cases, any apparent increase in knowledge is a deception. I have mentioned these subjects in passing earlier; but I think a somewhat more extended treatment is in order.

The first kind of mystical experience is actually intellectual, but it has no content, and is analogous to the blackness one sees when looking into perfect darkness--which, as I mentioned under sight in dealing with the sense faculty in Chapter 5 of Section 2 of the third part (3.2.5), is a special form of the self-transparency of the conscious act, which is aware of itself but is reacting to nothing.

There is a normal experience similar to this blackness in the intellectual realm, as I hinted when dealing with abstraction in Chapter 4 of Section 3 of the third part (3.3.4): the experience of puzzlement. In that experience, a sensation of some kind has turned on the spirit in its function of understanding, and it is examining the sensation to find a relationship to understand--but it doesn't understand one as yet. The experience is that of expectation; one knows that one doesn't understand, and at the same time one knows (or hopes) that one will understand. The spirit, of course, when it is active, is immediately aware of its own activity, as I said as early as Chapter 11 of Section 1 of the first part (1.1.11), and discussed more at length in Sections 2 and 3 of the third part (3.2.1, 3.3.1).

But in this experience, understanding thinks it does not know and has an expectation of knowing a concept; and in this its knowing-unknowing is different from the mystical experience I am now speaking of.

Empty consciousness is understanding's awareness of itself when it has deliberately refused to know any relationship.

Puzzlement involves an attempt to go beyond the state understanding is now in; empty consciousness rests in it, and deliberately tries to preserve it. It is understanding, therefore, since it is an act of the spirit and not a sensation, and it understands itself as knowing; but it precisely knows nothing. However, if this is a kind of terminal phase for the spirit, it understands this "nothingness" it knows in a positive sense, and not as the equivalent of the statement, "I do not know anything." It is as if the nothingness is some kind of object for it, a kind of "non-object" object where all distinctions disappear (because a distinction is a relationship), and the mind is transported into another realm entirely, to which conceptual consciousness is completely foreign.

Achieving this state is actually very difficult, because understanding is extremely ingenious in finding relationships; that is its nature. As we saw in Section 3 of the third part, but more especially in Section 5 of the first part, the function of understanding is to enable material spirits--who are affected by outside energy in such a way that some of these energies have a conscious (and subjective) "dimension" to them--to achieve the only kind of objective knowledge possible for such beings. Hence, in one sense, understanding, in knowing only itself, is aware that this is not its natural condition, nor is it a condition in which it knows something objective; and so it normally tries to get itself out of this condition in any way it can.

And it is for this reason that the non-Confucian Eastern philosophies, which have this experience as their goal, involve years and years of training--and training, not in studying anything, but in achieving "purity of spirit." The breakthrough into this consciousness cannot even be achieved for a very long time; and it takes many more years before it can be sustained for more than a few seconds or minutes.

The object of this training, if you look at what is going on in yoga and the various types of Buddhism like Zen, is to do something so that the student will concentrate on some individual object to such an extent that he "thinks away" all relationships, and just has it before his spirit--or rather, since having it "before" his spirit implies a dichotomy between the two, that his spirit becomes totally absorbed in this one individual thing (and does not even think of it as "an individual thing," because that too is a relationship) and this thing becomes absorbed in the spirit's attention.

Yoga achieves this by first doing many physical exercises; but although those who play with it use it for its benefits to the body, this is by no means the purpose of the exercises. Their function is to get the body under complete mental control and relax it absolutely, so that it and its needs do not get in the way of spiritual activity. This is why one of the characteristics of all the contortions of the body is that they are to be done slowly and calmly, regulating the breathing while they are being performed. They also produce flexibility and pliability, so that no position of the body results in pain and distracts the mind in its contemplation. Further, each of the positions of the body is given a symbolism, which helps the early learner realize what yoga thinks the body's reality is: a shadow of the mind, and an insertion into the world of illusion.

Hence, the purpose of the yoga exercises is not to develop the body, but to free the mind from any dependence on the body. This is why the most important yoga exercises are not those of stretching, but the breathing exercises and the control of things like the heartbeat. The idea is that if a person can put himself into a state analogous to that of a hibernating bear, but keep his mind active, then he can engage in mystical contemplation most successfully.

Zen is an attempt to shortcut such procedures and go directly to the contemplation of empty consciousness. What seems to be behind this is that the exercises of disciplines like yoga involve too much concentration on the body for too long, and too much symbolic knowledge, which ultimately has to be thrown aside. Why not aim directly for the goal and simply do the mental work of thinking away all relationships?

But Zen has its own difficulties. First of all, you obviously can't learn it from books, because they present you with facts; you have to be led toward contemplation by someone who has achieved it. And since it involves understanding nothing at all, while the student naturally wants to know what he is supposed to be thinking about, the master's training has to be very devious: he must somehow show the student that the attitude of mind of asking questions is the very attitude that leads away from the goal; and so questions are sometimes answered in an absurd way, or even with a slap, and so on.

The student, of course (since the master tells him nothing), is puzzled, and thinks for a long time that he is asking the wrong kind of question; and it is only after a very long time that it dawns on him that his job is not to ask questions or seek answers, but simply to think without thinking about anything at all. Since this seems to the uninitiated a total waste of time and something involving the very opposite of wisdom, the student's mind naturally resists it, and he must somehow be led to overcome this recalcitrance and actually try to think without thinking about anything.

And of course, until you actually do this act of totally emptying your consciousness of everything but the act of consciousness itself, you do not get into this completely different type of consciousness which will appear, once it is achieved, as absolute wisdom. If the slightest concept is there in consciousness, then of course, understanding knows some abstract fact, and is not confronted with understanding in its absolute nakedness--in the same way as seeing any light at all destroys the volume of blackness in seeing nothing. Hence, no matter how far along the student is in his practice of Zen, he is absolutely nowhere until he has--at least for a brief moment--actually reached the goal.

Of course, once he has reached it for even an instant, then the world disappears and simple, contentless knowledge is known, but not even known as knowledge, since this too would be a concept; and it is then that he realizes what the master has been trying to teach him, and knows more or less how to achieve it; and from then on the problem is how to get it again and sustain it for longer and longer periods.

Let us look at this consciousness for a moment.

Since it has no concepts in it at all, but only the contentless understanding-of-understanding, then it is not surprising that those who try to describe it to others do so in very mysterious terms. One who has it understands that all is one, and that he is one with everything, that everything is nothing, that nothing is being and that being is nothing, that I am the whole universe, that there is no "I" over against everything, and that all conceptual experience with its abstractions is laughably insignificant as "knowledge" in comparison to this absolute, all-encompassing wisdom.

It is understanding, as I have pointed out, and can't be called false, because it understands its own act, and all the phrases above are recognized as false and totally inadequate expressions of the act. And since it deals with nothing at all (but not as such), then it is very hard to convince a person who has had it that it doesn't simultaneously deal with the "real truth" about absolutely everything (because everything and nothing have the common characteristic of being undifferentiated--and in this experience they merge into unity, because you are precisely refusing to make distinctions).

But if my analysis is correct, this Buddhist nirvana is not only not absolute wisdom, it is absolute unwisdom; it is not only not the knowledge of everything, it is the absolute minimum below which there would only be unconsciousness; far from being the expansion of consciousness to encompass infinity, it is the contraction of consciousness to the least it can possibly be and still know.

And that this consciousness is regarded as the real truth is the explanation of why the Indian philosophies hold that the world in which we live is a world of "illusion," and that the really real world is the world "behind" it which we can discover through this mystical experience. It also explains why acting (karma) is a dirty word in these philosophies. When you act, as opposed to contemplate, you tie yourself down to the world of objects and make yourself simultaneously a subject over against them and an object in this world; and since all of this is a dream, not reality, then you have to get rid of this way of behaving if you are ever to escape from the wheel to its center.

Clearly, this view of life is the exact antithesis of what I hold in this book; instead of formulating finite goals for yourself and seeking them in this life, you are to give up all goals and make your goal the goal of not seeking anything finite, but contemplating the All that is Nothing. You are not to be concerned about anything that happens in this world, though you have infinite compassion for it (in that you look down on it as illusion); but you realize that none of it is any more real than a nightmare--and so you are uninvolved in everything.

Now it is true that, from God's point of view, nothing that happens in this world matters, in the sense that nothing that happens in this world can affect him in any way. But there is a vast difference between the uninvolvement that the Eastern philosophies hold up as the ideal and that of the Creator, who isn't in fact dreaming but is causing finite beings to exist and actually interact and affect each other, and is causing finite free beings to create themselves unto their own image and likeness, and is helping them achieve the goals they set for themselves--even to the extent, if Christianity is true, of becoming one of them himself, and actually choosing to suffer in this world if the world chooses to inflict suffering on him. But even if the God reasoned to in this book is not the Christian one who became man, he is still anything but the "undifferentiated ground of being" that is dreaming the whole world of objects that we live in.

It is anything but surprising that a part of the world which inculcates this attitude has immense social problems; and I would think that they are insoluble to the precise extent that they are believed to be unreal and that to escape from suffering is not to try to develop out of it, but essentially to forget it by retreating into empty consciousness.

If what I have said indicates my bias and antipathy toward this philosophy, so be it. I recognize that I stand for the exact opposite of what it stands for. But I also recognize that my analysis shows (a) how such a consciousness could occur, (b) why it should appear as described by the people who have it, (c) why learning it is not a matter of learning more and more, but of learning less and less--how to empty your mind--and (d) why it should carry with it the absolute and unshakeable conviction that it is the only thing worth knowing and that it contains within it all truth and wisdom.

As it happens, I think there is at least a possible mystical experience which is the opposite of this one, and which is in fact absolute fullness of knowledge, and a foretaste of the Beatific Vision; but the characteristic of this latter mystical experience is that it does not lead one to inaction but leaves involvement in the world intact. And it may very well be that there are many mystics in Eastern religions who in fact have this type of mystical experience. But I will discuss this more at length in its place at the end of this section.

I do not want to leave the impression that the mysticism of empty consciousness is confined to those who practice Eastern philosophies or religions. There is a version of it in Christianity, in fact, called "acquired contemplation," a type of prayer that occurs usually after years of the kind of discursive meditation that monks commonly practice.

Meditation as practiced until recently in Christian churches (nowadays many have gone over to the Eastern version) has been an actual thinking about some religious text, some event in the life of Jesus, or some truth of the faith. For instance, one might say over to himself the Lord's Prayer and stop at every word, trying to discover all the meaning he can in it, thus: "Our"--not just mine but everyone's, which means that I am not special, that I am a brother of everyone else who can say this--etc., etc.; and then "Father"--not Master, as in the Old Testament, and not Creator, but, because of Jesus, an actual parent; and a Father because the mother of Jesus was Mary, a human--etc., etc. Or one might picture the Resurrection, and try to think of what it meant for Jesus, what it meant for the Apostles and Mary, what it means for me today, and so on; or one might consider death and what its implications are for a believer.

Of course, this sort of thing is conceptual and also imaginative, since one pictures what is going on when thinking of an event of the past; and the idea is to understand more about one's faith and what is behind it. I personally have found it very profitable. It also helps you to think and to notice details.

But after years of this, of course, a person tends to run out of ideas. After he has looked up at the cross, thought about it, and found immense riches of things to understand about it, the meditation becomes analogous to listening to a symphony for the hundredth or thousandth time; it is all completely familiar, and, as with the symphony, one lets the ideas wash over him, recognizing them, but not paying a great deal of attention.

Eventually, this becomes "the prayer of quiet." The meditator doesn't try to think about the cross any more; he just sits there in church and looks at it. As one elderly man once replied when asked what he did for hours sitting in the church, "I look at him, and he looks at me." There are no longer any contents to the act, and one is simply there, totally absorbed in the contemplation.

Now this may be the absolutely full mystical experience called "infused contemplation" that I spoke of earlier; but it can also be the kind of emptying of the mind by concentrating on just the one object and not attending to any relationships. In Christianity, this type of contemplation is not regarded as the be-all and end-all of existence, though it is considered a very good state to be in; but spiritual writers have always warned that it can lead to arrogance and that it is not necessary for holiness; it is by the "fruits" of virtue that you know whether someone is holy, not by the exalted state of his prayer life.

I might point out that there is a kind of contemplative tendency in Christianity which emphasizes "conformity to the will of God" and a kind of fatalism about things and withdrawal from involvement in the world. This, however, has generally been held to be a false view of things (not surprisingly, because the leader of Christianity was obviously much involved in the world); and even the so-called "contemplative" monks like the Trappists have usually had to work at something like farming, and have held that their duty was to be "the world at prayer." They look on their withdrawal from the world as anything but uninvolvement and indifference to the world; they do consider themselves lucky to be spared the temptations of the active life in the world, but they have a task for the world which only those who devote themselves to prayer can perform: they are praying for those who are either too busy or too blind to pray for themselves; and they act as spokesmen for the world in its loving relationship to God. After all, Thérèse of Lisieux was one of these contemplatives, and she considered herself a missionary, since she chose certain missionaries and prayed for their success; and the Catholic Church has made her the patron of missionaries, in spite of the fact that she never left her convent. This is evidence that the contemplatives think of what they are doing as a kind of work in the world.

And of course, if my view of things is correct, then withdrawal and "conformity to the will of God" in this passive sense is an abuse of human freedom, because it is choosing, as I said, to determine oneself in such a way that circumstances do the determining. It is also true that, since God has no goals of his own for the world, the world will be only what we choose it to be; and so if we remain uninvolved, we accept the world as it is, and not as the improved world it could be if we chose to do something about it.

That is, the two attitudes about the world could be expressed as the attitudes you can have about your house depending on whether you are renting it or whether you own it but the bank has a lien on it. If you are merely renting a house, you are like the person who is withdrawn from the world; if something goes wrong you call up the landlord and ask him to fix it, and put up with it until he does. If you own the house, however, you don't call up the banker when the roof leaks--or if you do, the banker might even say, "You had better get that fixed, or we may be forced to call in the loan."

Our world, just like ourselves, is ours. We can't sit back and do nothing (showing that we have no goal for it) and then ask God to fix up what's wrong with it. God, remember, has no ideals; and so the world is perfectly all right as it is as far as he is concerned, as long as we don't want it different enough to choose to do something about it. If we just keep our ideals and complain and pray in that "complaining to God" sense, then he's not going to do anything, because we obviously don't want anything done or we'd be doing it--or at least trying.

So again, the contemplation of God is not something that takes one away from this world, if my view of things is true.

In any case, there are two alternatives when you try to have a philosophical world view which includes mysticism. You can take the mystical experience at its face value and interpret ordinary experience in the light of it--in which case, what happens here in this world is illusory in one way or another--or you can fit mystical experience in as one type of human experience, and only part of it, and try to account for it in the light of the rest of experience--in which case, its claim to absolute wisdom is called into question. Obviously, I take the latter tack.

But doesn't my view suffer the same defect as that of the determinist, who tries to explain away the immediate datum of experience which is the conviction that our choices are free? Everything in the mystical experience is an "immediate datum of experience," and so, it would seem, its conviction of being true cannot be erroneous. But of course I am not saying that the experience itself is a false experience, only that it does not in fact report the truth about anything but itself in its nakedness. That is, just as the seeing of a black expanse is certainly the experience which it is, and yet it does not follow that what you are looking at is an undifferentiated void (it may just be that the lights are not on), so the understanding of bare understanding is understanding, as I have stressed so often, but it does not follow that it is understanding of anything, even of itself; it is simply the experience of what it is like for understanding to be "on" without understanding anything.

This empty consciousness is not confined to mystics who have spent years practicing it; in fact, we have all had it. The very first moment of consciousness must, for a human being, be the mystical experience of empty consciousness, for the simple reason that there is as yet nothing to compare and no relation to understand, no matter how complex the sense experience might in fact be.

That is, if we suppose that the first sensation you ever had was a pain in your left foot, you couldn't have recognized it as such, because you didn't know at the time that you had a foot, let alone a left one, and you didn't know what pain was, because you had nothing to compare it with. The sensation would necessarily appear as a single whole.

But because understanding "turns on" when sensation is active, this experience would also involve understanding; and yet since there is nothing as yet to compare, then no concept can be abstracted from the sensation; and so the intellectual "dimension" of this experience is like what the cartoonist depicts when he draws an exclamation point and nothing else in the speech "balloon" above some character's head.

Actually, this is not the same type of mystical experience as the empty consciousness I was describing earlier, because that other one was the experience of nothing at all (since all relationships were deliberately ignored, to leave understanding naked), while this one potentially has a content. Hence, this experience is more the undifferentiated awareness of "being," or perhaps "existence" or "activity," and in that sense is the exact opposite of the other kind of empty consciousness.

Hegel in his Wissenschaft der Logik starts out with something like this absolutely empty awareness of being, which is identical, he says, with nothing at all; and the logic he derives from this realization is that of going out of being and coming into being, with the result that the first "in and for itself" in logic is Dasein, which might be called a being--that is, not nothing. But I think he is mistaken here. There is all the difference in the world between being and nothingness; but you can't describe the difference, because the experience of being and the experience of nothing are both non-conceptual, and any attempt to point out the difference would involve using concepts. Still, this does not mean that a person who has had both wouldn't recognize the difference between the two, in spite of not being able to put it into words. Seeing undifferentiated blackness is different from seeing undifferentiated whiteness, in spite of the fact that you couldn't describe the difference if these were the only two color experiences you had.

In any case, this "exclamatory" awareness of being that marks the first moment of consciousness persists until some new sensation is recognized, at which point the person has his first conceptual experience, that of "different." That is, if the pain in your left foot was followed by the sensation of moving your left foot, but you didn't notice that this was not the same sensation as the previous one, then obviously you would still be in the first experience of undifferentiated being. Only when some different sensation occurred and you noticed it as different would you know (a) that you had had a new experience and (b) that it was different from the previous one--except that "new" and "previous" would not yet have any meaning for you, because they depend on the complex concept of time or sequence. And since in order to specify how they were different, you would have to recognize in what respect they were the same and in what other respect they were different, in this second experience the sensations are understood as undifferentiated wholes, and all you know is that one is different from the other.

The first few intellectual experiences of everyone must, therefore, be the same; and it all begins with a mystical experience which develops into conceptual consciousness.(1)

There is another part of a normal experience that is closer to the mystic's empty consciousness than this very first experience we all have had; and this is one "dimension" of the experience of falling in love.

When you fall in love, your experience has several "dimensions" to it: first of all, of course, there is the complex emotion which is the conscious "dimension" of the sex drive; and there is also the abstract knowledge of who the person you love is. Added to this, there is the esthetic understanding of the beloved, based on similarities of the emotional impact she has on you with the emotional impact of other objects; and this gives rise to such comparisons as "My love is like a red, red rose/ That's newly sprung in June," and so on.

But there is another "dimension" that is not describable, because it is mystical; it is basically the contemplative attempt to answer the question, "Why is she the one?" What is it about this person that is so special? After trying to find the characteristics that make her so attractive--and failing, because no one of them is adequate, and not even the sum of them is--you rest in the "mystery" of it all, and of her, and you simply contemplate her as a marvelous individual and accept her as "made in heaven" for you and all the rest of it. For anyone going through this experience it is all glorious, and somehow full of truth; but when you try to talk about it to others, you find that what you say bores them to sickness--especially if they have been through it themselves and the scales have fallen from their eyes.

This mystical aspect of the experience is like what I described above when talking about "the prayer of quiet," where a single object is looked at, but no attempt is made to find aspects of it which can be understood; you are simply "with" your beloved, thinking of her, but not thinking about her; and often it is enough just to be there in her presence, marveling and wondering. Again, the knowledge is intellectual knowledge, and so you "understand her"--and you seem to understand her in a deeper sense than you understood anything in your whole life, and you are convinced that you know more about her than she knows about herself, because you don't understand facts about her, but you understand "the depths" of her.

I'm sorry; but just as with other forms of empty consciousness, all you understand is the act of understanding, not anything about her. You have not "seen her very essence"; you have simply turned your understanding on and been so taken with her in her uniqueness that you left it with nothing to understand. And this is why those who have been in love and got over it (it happens, in spite of the fact that this mystical "dimension," being spiritual, says "forever and ever") realize that the knowledge you have is sham and delusion, however incapable they are of convincing you of this--or themselves, for that matter, if they fall in love again.


Notes

1. Note that intellectual consciousness starts out as abstract and only gradually works its way to concretion, not the other way round. In this, I think that child psychologists like Piaget are completely wrong. We first learn abstract difference, then abstract sameness (when one sensation is recognized as repeated); once these two are in hand, then understanding searches sensations for samenesses and differences, and eventually notices partial sameness (i.e. sameness within difference), and this is the beginning of observing beings like Mother moving within the visual field. Then similarities and differences among these "objects" are noted, while the baby is also discovering his own body by touching himself and watching and feeling. Once this happens, the baby is interested in classifying these "objects," and it is for this reason that children's thinking is concerned with the individual, not that they can't think abstractly. They are not at this stage interested in abstractions. Incidentally, the recognition of the self as a subject and of the objects as true objects comes rather late, and is connected with the recognition of a difference between dreaming (or imagining) and perceiving.



Chapter 3

Altered states of consciousness

I suppose the mysticism of empty consciousness could be called an "altered state of consciousness," because the experience seems all-encompassing and veridical; but what usually goes by this name is a type of mainly sensory experience, produced either by drugs or by hypnosis.

What LSD, peyote, jimson weed, and other psychedelic chemicals (including, to some extent, marijuana) do is raise the level of vividness of imaginary experiences to the degree that they are as vivid as--or even more vivid than--perceptions, and hence are hallucinations. The person may or may not also be aware of the world he is perceiving, with the imaginary one superimposed upon it. But the essence is that what is experienced is imaginary.

What apparently happens is that these chemicals are like the chemical "transmitters" of energy in the nerves of the brain, and when they reach the brain, they allow large bursts of energy (and hence vivid consciousness) to flow more or less randomly through it, like a very vivid dream. It may be that the logic of the sequence of images is not quite like that of a dream (where the energy simply follows the path of most frequent or most vivid association), but is more random. In any case, such experiences really tell us nothing about the world or about our own reality, in spite of how vivid they are.

And they are also, as I said when dealing with the real and the imaginary in Chapter 3 of Section 4 of the first part (1.4.3), very dangerous, because their vividness can burn pathways into the brain, causing the "trip" to occur all over again when the proper perception presents itself as a stimulus; and thus they can lead to psychosis, where the person cannot distinguish between what is real and what is imaginary.

But there is another type of altered state of consciousness, hypnosis, where the experiences one has are not random, though their sequence is not controlled by the one under hypnosis.

Just as we can be persuaded by others, particularly if they can make us empathetically feel the emotions they want to produce in us, so there seems to be an extreme instance of this in which we allow another person to take control over our instinct, and our ears become a kind of new "input port" for his voice, which takes over what our understanding ordinarily does for us in directing our instinct. What is going on seems to be a good deal like what happens when your computer is attached to another by means of a modem, and you see things going up on your screen that are put there by the person at the other end of the phone line, and over which you have no control.

The experience seems to be very like dreaming, including the fact that it is difficult to remember afterwards what went on; in fact, the word "hypnosis" itself comes from the Greek word for "sleep." But there are several significant differences. The hypnotist has a good deal more control over the subject than the subject ordinarily has over himself; the hypnotist can tell him to make his body rigid, and it can become so rigid that the subject can have no support under him except at his neck and ankles, and even have someone sit on his stomach, without collapsing. The subject can be made to feel no pain during an operation without anesthesia, and can remember things that are completely inaccessible to him in his normal state of consciousness.

Essentially, what seems to have happened is that the subject has yielded the power of concentration (the spirit's direction of the instinct) to the other person; and presumably these abnormal feats are due to the fact that in the person's normal condition, the spirit does not have this power over the instinct, because the spirit is also the unifying energy of the body, and some aspects of bodily regulation are best left by it to the energy-"dimension" of the act. As yoga shows, these aspects can, with much effort, be brought under conscious control; but this is by no means necessarily beneficial. Do you really want to decide how many times a second you should breathe, or how fast your heart should beat? Far better to leave this up to automatic mechanisms. But when someone else controls instinct, apparently these functions are also subject to control by the input coming in through the ears, and it becomes possible to do abnormal things.

That the control is not absolute is seen from the fact that if the hypnotist tells the subject to do something that is contrary to his moral code, then he wakes up--much as too much of a disturbance wakes a person from a dream.

Hypnotism is a kind of "possession" of one person by another. We don't think of it in terms of possession, because we recognize that more or less anyone can be hypnotized by more or less anyone else, if two conditions are fulfilled. First of all, the subject must be not unwilling to be hypnotized. That is, you can't, apparently, be hypnotized against your will, or if you resist it; but you could unwittingly be hypnotized by listening to the hypnotist without resisting him; for example, if you were a member of a class watching a demonstration, and you inadvertently became hypnotized along with the one who was the real subject. Secondly, the hypnotist has to know what he is doing, and how to get the subject in the relaxed state where he can possess the subject's instinct.

There are allegedly other types of possession, which (if they occur) seem to be a kind of hypnotism by the spirit either of a dead human being or by a pure spirit like a devil. Conceivably, what they call "good witches" are supposed to be possessed by angels; but I have never heard this stated in this way.

I do not know whether any of this happens, because, as I said when discussing the evidence for immortality in Chapter 3 of Section 4 of the third part (3.4.3), séances and such are very often, if not always, fraudulent; and either fraud or error is even more likely to be the explanation of demonic possession. Still, if they happen, then based on what is reported about them, they would seem to be like hypnosis.

It doesn't seem to me that we can rule out these communications from beyond the grave, since, though a disembodied soul can't be affected by anything that happens on earth, he can still affect the earth, if my theory is true; and so it is at least conceivable that he could possess the medium in such a way that he could communicate with those who are left behind. There would be no problem in his answering questions and so on, because, though he is not in time himself, he eternally knows everything that happens at all the times he is interested in knowing about, and so he eternally knows the question, and eternally produces the act of causality of answering it at the time when it is appropriate (just as God eternally causes me to be typing this at this moment). Hence, there is nothing theoretically against what is reported to happen in a séance.

What seems to be going on in a séance is that the medium gets himself into a state where he can be hypnotized by the spirit of some dead person; and when this happens, often his voice alters and he speaks like the dead person. And while he is in this trance, he is said to report things about some living person that the dead one would want him to know--and can give details about the living person that only the dead one and the living one are aware of.

One way to test whether this is actually going on, of course, would be to find out if the medium in his trance can actually report things that (a) can be checked, and (b) he couldn't have known, such as facts about the living person's life that the medium couldn't have found out (or guessed) for himself. But here one must be very careful, because if the person known about is present, the medium could (consciously or not) have read certain subtle clues or made a lucky guess--or even read the other person's mind. I would suspect that the way to do it would be not to have the person known about present (nor to have the experimenter know the facts to be revealed), and then afterwards to check how much of what is reported actually reflects what the alleged spirit would know of the absent person's life. Whether, of course, the dead spirit could be "called" under these conditions is questionable--which would make the testing procedure that much more tricky, but at the same time make me, at least, that much more suspicious about the whole thing.

What is normally called "possession" is the hypnotism of the person by a pure spirit, such as a devil or (as I said) perhaps an angel.

It is said to be a sign of possession by an evil spirit that the possessed person speaks in a language he never learned. This, I would think, is pretty good evidence of possession by some spirit; but of course since other human beings know languages, then it could be possession by some dead human soul. In order to establish possession by a superhuman spirit, one would have to prove that the possessed person had knowledge that no human being, even after death, could have; but it is hard to see what this could be. Not knowledge of the future, because if my theory is true, a human soul would know all about all the times he was interested in knowing about; and the same would apply to a knowledge of events occurring in far-off places. I don't think that actions beyond ordinary human powers would necessarily prove that it was a devil possessing the person either, even such amazing things as psychokinesis.

In fact, I can't think of any kind of thing a possessed person would do that would rule out the possibility that either he was in some kind of a self-hypnotic condition or that some other human being, alive or dead, had control over him.

For those who are concerned about demonic possession, the lesson from hypnotism should be instructive. If there is such a thing, and if it is at all like hypnotism, then you couldn't be possessed against your will, though you might be so if you foolishly left yourself open to it, like those who inadvertently let themselves be hypnotized. This also is what religious writers who have talked about such things say. So there really is nothing to worry about.



Chapter 4

Absolute consciousness

The other type of "altered state of consciousness" is another kind of mystical experience: a non-conceptual intellectual awareness which is to empty consciousness what seeing undifferentiated whiteness (mixture of all colors of light) is to seeing blackness (nothing to see). In one sense, it could be said to be understanding existence in its infinite fullness, just as empty consciousness is a kind of "contact" with nothingness.

Those who claim to have had this experience call it "infused contemplation," to distinguish it from "acquired contemplation," which they also generally seem to have had, and which seems to be the sort of thing I described as empty consciousness. Not surprisingly, those who talk about it do so in a religious context, since for them it is (as it would have to be, if our theory is true) direct knowledge of God himself, where God acts directly on the intellect and is known intuitively and not by means of concepts, as if the intellect "saw" him. They claim that this is not something that can be got by practice, because it is totally beyond human power and is therefore a free gift of God which no one can claim in any sense to deserve. Interestingly, they also tend to say that it is not necessarily something which one ought to petition God for, because it can bring with it the notion that because one has it one is specially favored by God (one is, of course), and this, for the wrong sort of person, can lead to thinking highly of oneself.

First of all, is it possible? I indicated somewhere in Section 4 of the third part why I think it is. The human spirit has to finitize itself to understand one definite concept; but this implies that it is in itself beyond the concept to which it limits itself; and since the concept can be any concept whatever, including such general ones as being, existence, or nothingness, it follows that the human spirit is in itself beyond any limited concept which it understands. The finiteness of the human spirit consists in the fact that it can't understand unless it finitizes itself in some way.

As I argued when discussing the Beatific Vision, what apparently God does is raise the human spirit above its necessity to finitize itself and helps it think absolutely, without any restriction on its thinking; and this infinite thinking is, of course, also God himself, because that is what God is, and God cannot be differentiated in his reality. What I am asserting here of this type of mystical experience is that it can occur in this life and not wait for the life after death.

Hence, in this kind of mystical experience, God does not exactly "show himself" to the person, he enables the person to become him intellectually while remaining (in the rest of his "reduplications" of his consciousness) the finite spirit and soul which he is, uniting the parts of this particular body. And this would have to be the case, if the finite person were actually to know God. John says in one of his letters, "We will be like him, because we will see him as he is." This is not quite accurate, because in God there are no parts (though presumably there are "reduplications" of the infinite Act); and so if what my theory implies is correct, it is more accurate to say, "We will be him, because we will see him as he is."(1)

There could be no distinction of subject and object if we understood the Infinite, because subject and object, as distinct entities, would vanish in the identity of absolute existence.

But this is not quite true either. There would be a subject/object distinction with respect to the other "reduplications" of the act, but not in the "reduplication" which actually understood God as he is; and so while in one "dimension" of himself, the finite person has been absorbed into God and become God--not a part of God, as the pantheists hold, but God--in the other "dimensions" of that same consciousness, he is still the finite self he always was, and even is still, if the writers on the subject are correct, capable of sinning.

There are those Scholastics who hold that people who have the Beatific Vision are not capable of sinning; and so they tend to say that this type of mystical experience is not really the same kind of knowledge as the Beatific Vision. They give two reasons for this: first, that some people who have had it have apparently lapsed into sin, and secondly that the Beatific Vision would necessarily produce absolute bliss, and these mystics are still quite capable of suffering.

As to the second point, if there ever was any human being who had the Beatific Vision while he was living on this earth, it was Jesus; and he certainly suffered; so this is no argument that the mystical experience of absolute consciousness is not the same as the Beatific Vision. As to the first point, it only follows that those who have the Beatific Vision cannot sin (in practice, that is, since their wills are still free) if you assume that the will by its nature desires "the good," and, since it is based on understanding, "the good" without qualification, or the infinite good. On this showing, once having possessed the infinite good (which, of course, is God), the will could not desire anything else, because it already has all that it could desire.

I discussed the fallacy of this argument in Chapter 10 of Section 5 of the first part (1.5.10), when I gave my view of why goodness (the ideal) is subjective, not objective; and I also discussed the fallacy in the "automatic" attraction toward "the good" in commenting on the Scholastic position when discussing choice in Chapter 6 of Section 3 of the third part (3.3.6). It does not follow that if a person possessed God, he could desire no more, even if he possessed God with the infinite act of possessing, because, as I see it, the Scholastic theory about the objectivity of "the good" is not true--and if it isn't, then it is quite possible for one in this life who has the Beatific Vision to desire something perfectly incongruous with it. "The good" is not something objective, automatically sought by the will; it is subjectively created by the human spirit.

St. John of the Cross mentions that in prayer he as a mystic was plagued with sexual temptations--which certainly were at least an attraction away from God, or they wouldn't have been temptations, but would have been images that would simply be ignored as silly or trivial.

One couldn't sin after death if one had the Beatific Vision, of course, not because the will is no longer free, nor because God so absorbs it as to make it in practice incapable of choosing anything else, but simply because a pure spirit can't change; and so once possessing the Beatific Vision after death, there is no way it can be lost, because the free choice for it is complete and eternal, the way the angels' free choice is eternal.

In any case, my view would make it quite possible that there is no distinction between this kind of mystical consciousness and the Beatific Vision, except that the person with absolute consciousness in this life is also in the other "dimensions" of his reality a changing being.

This type of mystical experience, then, would be utterly different from that of empty consciousness, not only in that it is absolute knowledge as opposed to absolutely minimal knowledge, but in that empty consciousness involves unawareness of everything else while in that state (because if distinctions occur, it is lost), while this type of mysticism can (and does, if we are to believe the writings and lives of those who report having it) exist together with all sorts of other experiences and activities in this world. It is "there," in the background, not taking over the whole of life, but permeating it, as it were, just as the experience of space permeates all of visual experience while remaining only one "dimension" of it.

What it seems to be in its final stage (what St. Teresa of Avila called the "spiritual marriage") is a kind of non-conceptual knowledge of the truth, which enables a person to "see through" falsehood and recognize conceptual truth when he encounters it. I mentioned this in passing when introducing Section 4 of the first part. It is possible that Socrates had it, for instance; and it might be what he was referring to as his daimon, his "guardian angel" who warned him when he was going to do something wrong. As I said in discussing empty consciousness, it is also quite possible that some Eastern mystics have been given this absolutely full consciousness; there is no law that says God can only give his gifts to Catholics or even Christians, much as some Catholics or Christians would like to think so.(2)

But whoever has it, it seems to be a kind of intellectual "taste" for the truth, because what one learns conceptually he compares with the Absolute Truth which he knows by being absorbed in it; and he can recognize incongruities and compatibilities when he encounters them.(3)

But beyond that, the experience seems to be completely ineffable, and, like empty consciousness, is described in words only by uttering paradoxes like "the light that is so light that it is darkness" and so on--in terms not unlike those of empty consciousness, which is not surprising since both are non-conceptual (but of course the paradoxical "Everything is nothing" isn't there).

Obviously, if my view of what is going on here is true, this type of mystical experience is only possible with God as its "object" (and since it is beyond human nature, with God as the one who bestows it); and this is at least consistent with what the writers on the subject say. They say that the only thing any finite spirit, such as the devil, can take possession of is the sense faculty, and only God can directly act on the intellect or the spiritual aspect of the human being.

As to the genesis of this kind of experience, it seems that it only happens to a person who is reasonably far along in prayer and devotion to God. I would suspect that there has to be a desire to let go of the control of one's own life (a control which is perfectly legitimate) and let God work in oneself, taking over one's life.

A few words must be said here, for two reasons: first, because this sounds like the Buddhist uninvolvement that I castigated; and second, because letting God (or the authority in the monastery) take over the management of one's life seems to be an abdication of freedom.

As to the first point, letting go of control over what you do by putting yourself under authority in everything precisely leaves you open to involvement, if the authority tells you to do something. It is pretty hard to withdraw into a shell and merely contemplate if you are assigned to run a soup kitchen.

As to the second point, the abdication of one's freedom: it is true that monks and nuns take a vow of obedience, letting the one in authority in the monastery or convent dictate even the smallest details of their lives, and willingly doing what that person even hints that they should do. And of course, it certainly looks as if letting God take over control is an attempt to dragoon God into making one's decisions for one, and so into taking the responsibility for one's acts. If that were what it was, it would be a supreme example of Sartre's "bad faith."

But it isn't like that at all; and I speak with personal knowledge here, having been a monk for eight wonderful years, until I was, to my surprise, called away. The monk is still totally responsible for his choices; he just chooses (as a sacrifice of self-centeredness) to go along with whatever he is ordered to do or whatever the "superior" (whom he recognizes, of course, as only superior in status) suggests--as long as it is not morally wrong--because what he does does not matter to himself since he himself does not matter to himself.

But this sacrifice of one's own control has behind it the intention of letting God take over one's life. Because the promise of obedience is a vow before God, it is made with the understanding (a) that this is done to show how much more important God is in one's life than one's own interests, and (b) that what the superior says will be what God wants one to be doing at that moment--always supposing (since the superior is finite and fallible, and can even be perverse and sinful) that what the superior says does not contradict some command of God.

Behind this is the knowledge that the person does not really know himself, and consequently does not know what he would enjoy or what he "really wants"; but he knows that God knows this. And since God wants nothing but the person's happiness, he gets into a situation where he agrees, out of love of God, to give control of his life to another fallible human being, with the hope (a) that God will accept this as a loving sacrifice, and (b) that God will make things work out in such a way that he could not, by taking control of his own life, have made it a happier or more fulfilled one. A hundred times as much in this life, and eternal life to boot.

As to the letting go and letting God have control of one's life, this is not something capricious, since the potential mystic knows that God is not the enemy of understanding and reason, but their companion. Hence, the tacit agreement in letting God take over is that one will do in whatever situation what seems the objectively more reasonable thing to do, trusting in God to make what seems the more reasonable thing be the thing that can bring about the greatest happiness of the greatest number (always letting "happiness" be defined subjectively, of course). So the person who lets God take control over his life is not actually doing anything but what a reasonable person who was trying to control his life would do: act in the most reasonable way.(4)

This is done, however, in the knowing-unknowing of faith, because in this life we are not only not really aware of what we would most enjoy, we are also not aware of others' happiness (because we can't know what their goals really are, for one thing, unless they tell us--and even then they do so haltingly and inadequately); and so we can't know whether the act actually did the good we intended or whether it wrought perhaps some tremendous damage we didn't intend--and whether this damage might or might not have been the best thing for the person we hoped to help.

So a great deal of humility is contained in this willingness to let go and let God take over control of our acts; because we never do lose control, and we are still responsible for everything we do; and it is only in faith that what seems the reasonable thing is known to be the thing that gets us to the goals we actually have, which is the greatest happiness (including the greatest freedom) of the greatest number.

The incipient mystic knows that at any moment he can take back control of his life, set definite goals for himself, and make certain things important, even vitally so. But he chooses to act consistently with what he knows the facts to be: that there is nothing objectively important, least of all himself; and he wishes to be absolutely honest with himself and not to matter to himself at all. St. John of the Cross was given the name Doctor Nada--"Doctor Nothing"--by his contemporaries, because he kept insisting that what one should want for oneself was nothing at all.(5)

Needless to say, this non-evaluative mode of existence, where nothing at all is evaluated, not even oneself, is exceedingly difficult to attain; but it seems (at least from my perspective) to be the goal the Christian is aiming at, because it is only in this way that one can love as God loves.(6)

Wishing to love as God loves, the incipient mystic then prays to consider himself as of no importance whatsoever, and asks God to take away from him anything that he likes for his own fulfillment--and to replace it with whatever God sees as what he should have or do; it is only by rejecting everything as "mine" that a person can be totally self-forgetful and able to love as God loves.

And if a person sincerely makes this prayer, God will answer it, and gradually--insofar as he can stand it--take away from the person everything he thought would make him happy; and in such a way that the person generally has the opportunity to hold onto it, while at the same time it seems more reasonable to give it up.

To take one example, I said that my years in the seminary were wonderful years, and they were. But I am a creative kind of person, a maverick thinker, really unsuited to life in the Jesuits, where "conformity of mind to what the superior wishes" is the prime virtue. Someone like me should take the initiative in what he does, because no one else will think up the crazy ideas and projects he comes up with--and so waiting for the impulse to come from the superior is not really consistent with my nature. But I didn't realize this, because (then at least) I was also very docile and had no problem with obedience.

But it happened that I was told by my superiors to undertake a task (teaching high school) which was, because of my peculiar makeup, supremely repugnant to me; I used to wake up retching every morning as I faced another day. After several months of real agony, it finally dawned on me that just because something was hard, this was no sign that it was the will of God; and I was told by my superior to consider my life and my vocation during the customary retreat at the end of the year. I did so, and found out something like what I described in the paragraph above; and when I added up the reasons for staying a monk and for leaving, the reasons seemed all on the side of leaving. I was apprehensive about going back into the world after eight years away from it; and I wrote to the superior saying that it seemed reasonable for me to leave, but that if he even hinted that he thought I should stay, I would be only too happy to do so. He told me to leave. And now I am a husband, the father of two wonderful children, a philosopher of the sort I probably could not otherwise have been, an actor, and a thousand other things. I gave up the one thing I was sure would never be taken from me; and in return I have been given all that I gave up when I entered the seminary, and how much more besides only God knows.

This giving up of self-interest is, I think, not simply the task of a mystic, but of every Christian, who if he is to love as God loves, must abandon all ideals and face the world and himself with complete realism. But of course those who are given the gift of God's own consciousness in this life are apt to be those most serious in pursuing this goal of self-abandonment to the limit.

I realize that in our present age of "fulfillment," this abandonment of self-interest sounds perverse and even immoral and inhuman, so I want to stress that it is actually conformity to the objective reality, in which "self-esteem" is seen for what it is: a lie and a cheat to make it possible for people to get through life. This view sounds on the face of it absurd, because we are so trained to evaluate and think in terms of importance (particularly self-importance); but in fact we have no importance, and if we want to be honest with ourselves, we should admit it.

And so the stages in mysticism reflect this. The Master is not cruel, and so he gently leads the soul along. At the beginning of one's commitment to this enterprise of loving God, prayer is apt to be filled with rapturous and exultant emotions of a kind of sexless love of God: what St. Ignatius called "sensible consolation." The neophyte is convinced that God is near, God loves him, and that God is leading him toward bliss--indeed, he seems to have found it already, and if this is what this life is, what must heaven be like?

This emotion is a gift, to be sure, but it has nothing to do with the actual union of God and the spirit, because (obviously) it is an act of instinct, which has an energy-"dimension," and is hardly a foretaste of heaven. The mystical experience has absolutely no emotion connected with it at all. In fact, since it is even beyond concepts, it is in a sense not recognizable in any definite way by the person who has it; because he can't feel it, and he can't understand it in any ordinary sense of understanding at all. The purpose of the "sensible consolation" is just that: to give consolation and encouragement to the soul on its very arduous journey. It occurs also at intervals during later stages of development, of course; but the intervals become rarer and rarer as the soul becomes stronger and less self-interested.

It is this radiant joy at "confronting God" that is what I think certain Protestant sects are talking about when they refer to "conversion," and the conviction of being saved when one "accepts Jesus as one's Savior." It certainly has the power to change the direction of a person's life; but (a) it is unsustainable, and (b) it is unspiritual, simply because it is an emotion, however connected with the spiritual it might be. The spiritual life is simply not an emotional life; and (as we will see shortly) the higher a person goes in the mystical life, the more the emotions are apt to be negative ones, and the less emotional satisfaction one gets from things divine. This is not surprising, if the mystical life is an advance in conformity to the truth.

At any rate, after this initial "honeymoon" stage, as God takes over one's spiritual consciousness and one begins to give up control of one's own intellect, the attention wanders, and all sorts of sensations occur as distractions in prayer. These distress the developing mystic greatly, because meditation used to be so easy, and concentration on the text at hand or the event in Jesus' life so enjoyable. Now it is next to impossible to tie the imagination down. And of course, since the expansion of the intellect toward the Beatific Vision is beyond sensation and beyond concepts, the mystic does not feel what is happening, and does not understand what is happening--and though he is aware at a very deep level that the "right" thing is happening and he loves God even more than before, the apparent distractions caused by the release of his instinct make him think that he is slipping back and abandoning God for the "pleasures of the world." But of course, concentration and sensation are only needed for conceptual thought, not this direct intellectual vision; and so it is not surprising that concentrating on some holy image would be what is really the distraction.

Of course, after several years of prayer, it is also possible that those who are not constantly striving for greater and greater love will also find that their prayer becomes more and more distracted and a greater waste of time. But their attitude is rather one of abandoning it and what it stands for as pious dreams of neurotics, and returning to the "real world"; while the mystic knows that this is false, and desperately does not want to do it. Still, he sees himself as indistinguishable from his worldly colleagues, and who is he to say that he is different from or better than they are?

What the mystic has to do, actually, is not pay attention to these distractions, to let them happen without letting them worry him, and simply let God do the work. There is absolutely nothing he can do to advance in this project of his except let go, and not even make it a project of his, but of God's, for whom a thousand years are as an evening gone. He can't hurry things; what will happen will happen in its own way, in God's own time.

The novice mystic is also concerned about letting go and giving up control over himself because he realizes that God is not the only spirit that can affect him, and he is afraid that he might fall into sin. This is always a danger, of course; but there is no guarantee that choosing to live the spiritual life is safe. And, of course, in abandoning oneself to God, one does so in faith that God is not going to allow the devil to have the upper hand--at least permanently. There will be lapses; but "be brave," as Jesus told his students as he entered the garden of Gethsemani, "I have won the battle with the world."

This fear is enhanced by the fact that as the mystical consciousness develops, ideas more or less "just come" to the person, and he surprises himself with knowing more than he thought he knew. What he needs is "just there" when he needs it, though he can't necessarily call up knowledge at will; and it is only after giving thought to his insights that he can see that they are logical and sensible. This is a frightening experience, because it does seem as if his mind is being taken over by someone beyond him, over whom he has no power--even if all that has happened is that he now has greater access to the right side of his brain, where non-discursive (non-"logical") connections seem to be made. It is one thing to choose to abandon oneself; it is another to experience the loss of oneself. But of course, it isn't a loss; it's a partnership.

The mystical writers call this the "night of the senses," and it lasts often for many years, until the distractions in prayer no longer bother the mystic, and he realizes that somehow or other he is praying, and he would not give it up even if the chance were offered. It is simultaneously a kind of relief and peace (emotions, you see, are still involved) and turmoil and torment. God grant him at this stage a spiritual director who knows what is going on and will encourage him, but not give him advice on how to rid himself of distractions, or how to think logically and not listen to "inspirations."

As to this last, St. Ignatius talks about "discernment of spirits," to find out whether these inspirations come from the Holy Spirit, or are temptations--either from the devil or one's own heated brain. What he says is that, if a person is trying to be honest and do what is best, then the Holy Spirit will not have to fight to put ideas into his head, and so these inspirations will be accompanied by peace. Inspirations which bring with them emotional excitement, even if the emotions seem quite positive, are apt to be temptations, because for the devil to get the person to do what he wants, he has to storm the person's mind. Of course, for those who don't care about honesty and God, it works the other way; new ways of advancing by cheating are received peacefully, and thoughts of straightening out one's life bring emotional turmoil (like the emotions accompanying "conversion" that I mentioned).

But the real sign that inspirations are from the Holy Spirit is obedience. "You are to have the same attitude that was in Prince Jesus," says Paul, "who, when he was in the form of God did not think being equal to God something he had to keep hold of; he emptied himself and took on the form of a slave, and became the same thing as a human being. And when he found himself in human shape he lowered himself so far as to submit obediently to death, and death on a cross."

For the monk, this means that what the superior wants is the objective touchstone of whether his inspirations are those of the Holy Spirit or not. The Holy Spirit does not contradict himself; and if you are inspired to do something and, let us say, hide it from the superior because you think he might not approve, then that inspiration is not from above, however good the idea might be in itself.

For the layman, who has not handed over the initiative for his acts, informing those in authority about what he is doing is out of place. What this obedience means for a layman is that it is not an inspiration of the Holy Spirit when he is prompted to do what contradicts anything any legitimate authority commands--whether that authority is in his business, civil authority, or that of the clergy. It is only when a command by any of these authorities is known to be positively immoral that disobedience can be prompted by the Holy Spirit.(7) And, of course, if a legitimate authority puts a stop to a project in any stage of its development, then the Holy Spirit is not going to be inspiring a battle to keep the project going.(8)

To continue with the development of the mystical life, sometimes--that is, with some, but by no means all, mystics--after this stage of the "night of the senses" has passed, God seizes the whole mind, sense, spirit and all, and the mystic is lifted into ecstasy, where he loses contact with his surroundings and knows that he is in contact with God, in a way that cannot be expressed in words. St. Ignatius spoke of intellectual visions of the Trinity while in these ecstatic trances.

These ecstatic states are often accompanied by what one might call "psychedelic experiences": the "visions" of Jesus or his mother or the saints that are the stuff of folklore. Either God or perhaps the saint in question takes over the imagination and produces the image, often with a vividness beyond that of ordinary perception, as happens also with psychedelic chemicals.

This can also happen with people who aren't particularly saintly, as long as they are well-disposed; such experiences are useful for transmitting some message to others, as with Bernadette at Lourdes and the children at Fatima. Not surprisingly, these people tend to become contemplatives afterwards.

Apparently, this seizing of the instinct (which is what is going on) can be so strong as to produce extraordinary physical changes in the body; and there are people who acquire the five wounds of Jesus, which bleed and cause pain, but do not do harm to the body or become infected, though they remain open. I would suspect that this is a kind of extreme case of what hypnotists can do with the body when they take over the instinct. From what I have heard, the wounds in the hands are actually in the hands, where traditionally people think the wounds of Jesus to have been, when they were almost certainly through the wrists, or the weight of the body would have torn them off the cross in a matter of seconds.(9)

This seems to me to show that this gift of what they call the stigmata (wounds) is analogous to the "sensible consolation" earlier; and is something that occurs in the body consistently with the way one thinks of Jesus, rather than a kind of "wounding" by Jesus of the mystic's body, if I may so speak.(10)

There are also legends about monks in mystic ecstasy floating up into the air. They say that Joseph of Cupertino (now the patron of aviators--who says the Catholic Church has no sense of humor?) used to do this whenever he heard the word "God," and was ultimately relieved of his duty of serving dinner, because during the reading at table, when "God" was spoken, he would rise up into the air, tray in hand, bump his head on the ceiling and spill the contents of the tray onto the brothers below.

I don't know whether these legends have any factual basis; but if they do, I suspect that what is happening is that the ecstasy takes the concentration to such an extreme that some of the energy that would ordinarily be expressed as mass is used in it, and the person's body becomes less dense than air during this time--and like a helium balloon, he floats.

These manifestations are interesting to outsiders, and can be useful in leading people to faith, I suppose. But, as the spiritual writers on the subject attest, they are not essential to progress in the mystical life, and can in fact be detrimental, by making the person interested in himself and his sensory experiences or corporeal feats, and giving him an exalted opinion of himself--which is, of course, just the opposite of the goal of the whole enterprise.

After the "night of the senses" or this ecstatic stage come years and years of "aridity," called the "night of the soul" or the "night of faith." In this stage, the person is not really interested in much of anything but God, and wishes sincerely--and steadfastly--only to be his slave. He may, by the way, be engaged in all sorts of external activities; but these have no importance to him except that they are his service to God, and would be abandoned at a moment's notice if he thought that God wanted him to give them up. Think of what this means. Could I, for instance, leave this book now, unfinished, and begin some totally different career? Or even see what I have done torn up, after some thousand pages?

This is rightly the "night of faith," because it is the point in which the mystic is to live by pure faith that what he believes is true, and not by any experiential certitude. He has asked to have everything taken away, so that he can live purely for God and not for himself at all; and the last thing to be taken away, oddly enough, is psychological contact with God.

It is often the case also that mystics are misunderstood and their motives questioned, so that they have to give up having people think well of them. This is not something to be sought after; Jesus, for instance, though he was ultimately held in contempt by practically everyone (possibly even most of his own students, "who had hoped" that he was the Messiah, and whose hopes were dashed when he didn't come down off the cross), never did anything to bring it on himself, and always tried to be polite to people consistently with not contradicting the truth of what he was.

So those who try to have people despise them are not really imitating their Master; but it is a fact that if they imitate their Master, they will be regarded as hypocrites and have to put up with being thought of as evil by perfectly sincere people, even those they love most dearly, without having any defense at all against this. Reputation is a very, very hard thing to give up, especially by those of us who have not been perfect in our actions, and have to say that we have done things to deserve the opinion people have of us--though perhaps not to the degree that they have it.

The point is, of course, that such things are to be a matter of indifference to the mystic; God is taking away from him all that promotes self-interest; and after years in disgrace, it simply does not matter to the person any more.

And then God takes away himself. That is, the union of God and the intellectual aspect of the soul becomes more and more purified, and so the awareness of contact with God becomes less and less apparent, because the soul is God in this "dimension" of itself, and does not possess him as a beloved object any more.

Prayer becomes even more of a torture, particularly communal prayer like the Mass. The mystic, if a Catholic, still goes to Mass, because after all it is the participation in the crucifixion, and has meaning irrespective of what one "gets out of it"--one is bringing the crucifixion with its blessings into the present age, and to join in that work, however painful and, yes, repugnant, is enough. It is interesting and appropriate that the crucifixion, which the mystic used to attend with such peace and joy, should now be a kind of psychological crucifixion itself.

But the worst of this stage is that the person doesn't really believe that he believes. He is constantly beset with thoughts like, "You can't think that this actually happened! That he got up out of the grave and walked around, playing jokes on people! Why all this rigmarole to test 'faith' when he knows whether people are sincere? It's all wishful thinking, and you know it!"

At the same time, at the depths of his being, he knows in a totally non-conceptual way that this is false, that the Resurrection really did happen, that "wishful thinking" is simply reason asserting that the world is not absurd, and that in order for the world not to be absurd these "legends" have to be true. And he knows the evidence that what Scripture says is basically factual, and can refute the views of interpreters who interpret it away into "meaningfulness"--where, as Paul says, it becomes meaningless and a fraud. But all this is abstract, nothing but theory; and it is so easy, so terribly easy, for theories to be wrong. And so, his discursive mind is anything but convinced by his reasonings, because if there is a God, why has God abandoned him?

He cannot even be left the one thing that matters to him: his relation to God. Because in fact he is not related to God any more, he has become divine; and this, which is his goal, he now sees with his conceptual mind as the ultimate loss. He must learn that he has arrived, and that this state is the state he was looking for; and once he learns that, peace can return.

One of the other things the mystic finds it hard to learn is that his faults and sins do not matter. He is trying so hard, he hopes, to be honest and do what is right; but he knows that he isn't really trying very hard, and that everything he does is shot through with hypocrisy. But of course, in looking on himself this way, he is setting standards for himself, and realizing that he isn't living up to them. "I have finally become resigned," writes St. Thérèse somewhere, "to being imperfect." God doesn't care about your sins; why should you? God accepts you absolutely for what you are; why shouldn't you?

That is, ultimately for the mystic nothing at all is to matter; there is to be no motivation for doing anything. When confronted with going on with his life or giving up, he constantly thinks, "Why do I bother? It's all an exercise in futility," and the only answer is to be "Why not?" He sees no realistic hope that he will make a difference; he is nobody, he does the very opposite of what he wants to do; nobody is interested in anything he has to offer, and what he does seems always to backfire--and so why not quit? And there is no answer to this except the realization, which comes from his union in that hidden "dimension" of his consciousness, that he can't quit, that there is no real question of quitting, that he just has to go on--for no reason that is convincing.

You do what you do because you do it. And this of course is exactly why God does what he does. You provide opportunities for the world, knowing that for the most part the world is not interested in the opportunity because it has its own axe to grind, and that most people won't take it, and will even resent your "interference," and, like pigs, trample on your pearls and then tear you apart.

True, it is possible to reason that you are doing the right thing and that it is probably having an effect on others that is positive--and sometimes you get told this. But all of this is abstract, and it doesn't mean anything any more. There isn't a God; you're just acting as if there was one; you're just theorizing in empty air, because life is so horrible that staying alive for one second without this belief would be impossible--which proves that it is simply a way to get through life, a rationalization, and you're not really being honest, you are the hypocrite everyone thinks you to be. And, of course, you see your own actions and their multiple motivations so clearly, and you find hypocrisy in every aspect of your life. Who are you to be expecting God to do great things through you? Who are you to call yourself a mystic, who is actually living God's life of infinite bliss here and now? How absurd and stupid! Wake up and live!

And you can't, that's all. You have to hang on.

For years and years and years and years.

If it sounds as if I have described a classic case of depression, it would be bound to sound that way to anyone with a little psychological training. But in fact, it is very different from depression, because all of this is in the discursive mind, and the mystic knows that he is on the right track, that the belief that he thinks is false is actually true; and at the very depths of his soul, he is at peace and even happy. If you look at what mystics who have gone through this have done while they are in this state, you find that they are not like depressed people at all; they are very active, and often even seem to be quite cheerful. St. Thérèse of Lisieux related in her autobiography how, toward the end of her life, the novices she had charge over thought she was happy and full of faith, while she was in constant agony and tormented by doubts. "But I am not alone," said Jesus after predicting that in a few minutes all his students would scatter. "The Father is with me."

There is a serious danger here, particularly if this time is very protracted, that the mystic will follow advice and seek psychological "help," and lose all that he has gained. The goal of psychological treatment, after all, is to bring oneself back into control--and the mystic is trying to lose control. It is to give a person a sense of self-worth, and the mystic has been trying to lose all sense of self-worth, and has finally succeeded. It is to make the person come to grips with his feelings, and for the mystic, feelings (even these depressed feelings) are to be completely irrelevant.

Psychological treatment of a mystic in the night of the soul is, therefore, the exact opposite of what he needs. What he needs is reassurance that in fact he has arrived, and the worthlessness of everything is simply the negative side of what he wanted: to love as God loves. For the depressed person, it matters that nothing matters; for the mystic, it does not matter that nothing matters-- and like the difference between empty consciousness and this consciousness, the difference is all the difference in the world. The mystic is the only totally free person; the mystic is the only person who can face reality and himself absolutely realistically, because he has no more values.

And when this happens, then the final stage of "spiritual marriage" occurs, a time of peace and contentment and a happiness that is different from any other kind of happiness (not greater; qualitatively different). It does not mean the end of trouble and controversy in the world, still less a withdrawal into a non-involved eremitical state (though this is a temptation of mystics). It is an involvement that no longer contains any worry, and a goal-orientation that is no longer concerned about success. One does what one does because one does it; and this is enough. One still does the reasonable thing, because why do what is unreasonable when nothing makes any difference? And so one does what seems to be calculated to be for the greatest happiness of the greatest number--but if the actions don't bring this about, this doesn't matter. It is involvement, but coupled with absolute acceptance of absolutely everything.

I should point out that this stage--which is very close, if my view of life is correct, to the attitude of those in heaven toward the world--is not reached by all mystics; many die while still in the night of the soul. St. Thérèse did, for instance.

At any rate, that is a psychological sketch of what I think the mystical experience is, in the light of the philosophical view of human life I developed in the third part.


Notes

1. Forgive the grammar. I want to point out that John also indicates this identity when in his Report he has Jesus say, "I pray for them to be one thing (i.e. one and the same thing), just as I am one thing in you and you are one thing in me; I pray for them to be one thing in us."

2. I hasten to say that it has never been the official position of the Catholic Church that God bestows his grace (his gifts) only upon overt members. The Church does teach that, in one sense, everyone who believes in God at all is an implicit member, because if he knew that God wanted him to join the Catholic Church, he would. It is thus that the Church reconciles "there is no salvation outside the Church" with its assertion that anyone of good faith will be saved. He is saved through the Church (i.e. the body of Christ) whether he knows it or not.

3. I think, as I also said in Chapter 1 of Section 4 of the first part 1.4.1, that this is what Jesus' divine consciousness was like. As human he had to learn facts, but as God he had absolute truth, and so he recognized Peter's formulation of him as "the Son of the Living God" as the correct formulation.

4. Note that I don't have any problem with the Utilitarians' "greatest happiness of the greatest number" as a noble goal to be recommended for people. My quarrel with them is, first, that I don't think "happiness" can be defined by a calculus of emotional satisfactions minus dissatisfactions; and secondly, that I think that this sort of thing cannot be the basis of the moral imperative, or even what defines moral virtue.

5. To those who object, "Well, maybe I'm not any more important than anyone else, but I'm certainly not less important," my answer is, "By whose standards?" If there really are objective standards and you have ever sinned, then you are objectively worse than any cockroach, who faithfully does everything the Master ever wanted of him. Who are you to say your offense against an infinite God was insignificant? But if importance is subjective, then it isn't that you aren't less important than others; it is that objectively speaking you have no importance whatsoever.

6. Strictly speaking, the mystic has Jesus as his goal, the human expression of God's love in the world. Hence, the mystic not only has as his goal loving as God loves but loving as humans love also; so that a mystic cannot but be affected by others' suffering, even though God's love as God's is not affected by it.

7. In this connection, it is instructive that Jesus's crucifixion was due, according to the accounts, to his meticulous obedience to legitimate authority, both religious and civil. He had a right not to answer questions in his trial and so incriminate himself, and so did not answer any of his accusers. But when Caiphas, the High Priest, asked him directly, "Are you the Prince, the Son of the Living God?" he answered, and in such a way that there would be no ambiguity, making what would be for any mere human being a blasphemous statement which everyone heard. Also, when Pilate asked him if he was a king, he said that he was, but that his kingdom was not in this world--which, arguably, allowed Pilate to think of him as a madman.

8. Note that the incompetence or lack of virtue (or "leadership") of the one in authority is irrelevant. If the authority says not to do it, the Holy Spirit does not say that it should be done at this time and in this way. Rev. John Courtney Murray is now cited as a kind of martyr to the recalcitrant obtuseness of Rome, when he was ordered not to teach his theological view of Church-state relations (an order he obeyed). His view is basically the one which later was adopted at the Second Vatican Council as the Church's position on the matter. What of the Holy Spirit here? My view is that the Holy Spirit did not want that view taught at that time, for whatever reason; possibly because in that context it would be misunderstood and do more harm than good. Obviously, I think that Charles Curran in his continuing to uphold contraception in the face of the authorities' ordering him to stop teaching it is doing the exact opposite of what should be being done, irrespective of the truth of his position (which happens to be false, as we will see).

As to those who would cite the Representatives' (Apostles') statement in Acts at their trial before the Sanhedrin, "We have to obey God rather than you," (a) they knew of their own knowledge that Jesus did come back to life and that he, as God, wanted them to spread the message, and so (b) they knew that this command was a direct violation of a command from God. In cases like Father Murray's or Father Curran's, it is by no means obvious that the inspiration to teach these views is a direct command from God, however strong the internal conviction is that God wants it done. It is precisely because "the devil can masquerade as an angel of light" and produce very strong convictions that external authority is the touchstone of God's will.

9. For those who, like me, think that the image on the Shroud of Turin has too many anomalies about it to be explainable as anything but the shroud of someone actually crucified as Jesus was, and are inclined to explain away the carbon dating as possibly reflecting an irradiation of the shroud by whatever produced the scorch that forms the image (or possibly the results of the fire the shroud was subjected to), then it is significant that the wounds are in the wrists, not the hands. I would not go to the stake to maintain that this is the shroud of Jesus, by any means, because it could also be the shroud of someone who was crucified as a mocking imitation of Jesus--but still, it is something he would do, I think.

10. If I were ever given them, would mine be in my wrists? The fact that I have this curiosity probably would be enough to indicate that I would not be properly disposed to have them.

Section 2

Formal Logic


Chapter 1

The different kinds of logic

The types of mysticism mentioned in the previous section are the kinds of non-conceptual understanding we have. We have basically four kinds of conceptual understanding: perceptual (based on perceptions or images of stored perceptions) and esthetic (based on emotions), and then the rather peculiar forms of humor and evaluation; and each of these involves different kinds of reasoning or logic (ways of combining expressions of understood facts so that new judgments result).

I think I will take up different kinds of reasoning based on perceptual concepts first, and treat esthetic understanding and its logic next, then give a brief look at humor, and leave evaluation until last, to round out this part of the book. And what I plan to treat in perception-based reasoning is first of all the logic of statements of fact, called "formal logic," then the logic of relations and the related, or the philosophy of mathematics, and finally the logic of science, in which I will discuss why scientific method is what it is, and in the process talk about the apparently anomalous logic of induction.



Chapter 2

Logic and truth

I am not attempting to make this section a kind of mini-course in formal logic; it is rather an attempt to show the relationship between logic and statements, and why logic operates as it does, to give an understanding of what logic is, rather than to try to improve anyone's logical skills. In modern parlance, what I will be doing is meta-logic rather than logic here. I will be going through many of the traditional operations of formal logic, but the focus will be on how these operations reflect what statements are all about; and based on the relationship I see, I will offer critiques of traditional approaches and will give new formulations for some of the terms and new approaches so that things can become clearer. In that sense, this section should be useful for those who want to improve logical skills.

But a course in logic is something like a course in grammar. We already know how to speak; and so the grammar we have in our heads is really as complex as the one in textbooks. It is just that those who have studied grammar see why we say certain things in certain ways, and have reduced these relationships to rules, so that when we get into difficult constructions, we can understand what to say. For instance, the expression "between you and me" is correct, and "between you and I" is not, because "between" takes the objective case. Similarly, we know how to reason, but we can't necessarily spot all fallacies, because we've learned reasoning through practice and don't necessarily see why the rules work, and when apparently following them actually violates them. I have an example just below. But the point is that this particular chapter is not so much concerned with what the rules are as with why they are what they are.

Let me begin by remarking that formal logic has been thought for centuries to be the way we connect concepts or judgments; but I think a little experiment will show that it deals with statements rather than the acts of the mind that statements stand for. I have often put the following on the blackboard:

Either it is raining or it is not raining

But it is not raining

Therefore ... ?

There has always been at least one student who would answer, "Therefore, it is raining." But no one who is thinking could make such a mistake, since how could it be raining because it is not raining? The fallacy comes from knowing intuitively the rules of this kind of syllogism (Either A or B, but not A, therefore B) and from not realizing that B in this case is negative ("It is not raining"); and so if you affirm it, you have to state it as it is. That is, the reasoning goes:

Either (A) it is raining or (B) it is not-raining

But it is not raining (not A)

Therefore, it is not-raining (B)

The point, of course, is that the confusion comes from the words and the way they are arranged, and not what the words stand for; which means that logic deals with the words primarily and the judgments only secondarily, as being what is expressed by the words.
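
To make the trap mechanical, here is a minimal sketch in Python (my own illustration, obviously, and nothing the argument depends on), with each proposition modeled as a boolean. Notice that B is the whole proposition "it is not-raining," not the bare word "raining":

    # A sketch of the "raining" inference, modeling each proposition as a
    # boolean. B is the whole proposition "it is not-raining," not the
    # bare letter "raining"; the fallacy comes from forgetting this.

    raining = False          # the fact of the matter: it is not raining

    A = raining              # proposition A: "it is raining"
    B = not raining          # proposition B: "it is not-raining"

    premise_1 = A or B       # "Either it is raining or it is not raining"
    premise_2 = not A        # "But it is not raining"

    # Affirming both premises forces us to affirm B as it stands --
    # "it is not-raining" -- and nothing here licenses affirming A.
    assert premise_1 and premise_2
    assert B and not A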

This experiment not only shows that the earlier philosophers were wrong in thinking that logic was primarily the linking of concepts and judgments, but it also tends to refute those contemporaries who seem to be holding the opposite position: that logic deals with statements, but that there is really no distinction between statements and judgments. That is, from what I can decipher of what they have written, they are modern exponents of the nominalist fallacy that what is called "thought" is simply some supposedly spiritual something behind the words; but since they hold that what is spiritual is imaginary and unreal, the only thing that thought really is is the words. I dealt in Chapter 5 of Section 3 of the third part 3.3.5 with why this position can't be held consistently with the way we actually use words.

Language, actually, involves several different types of logic: (a) grammar, which is the logic of how words go together in the language to express the various acts of the mind; (b) style, which is the logic of how words and sentences go together to unite sound, appearance, and meaning; and (c) what is ordinarily called "logic": how the sentences go together so that the last one is understood to be related to what went before.

These logics are by no means the same. For instance, the sentence, "No dije nada" (Lit. "I didn't say nothing") is grammatical in Spanish, but illogical, because if you didn't say nothing, then it is false that you said nothing, which means that you said something. Standard English grammar is logical in this respect, though the grammar of non-standard English is different. The statement I overheard recently spoken by a Black woman, "Ain't nobody never told me about that" happens to be logical. What it means is, "Nobody ever told me about that," and the negative in the "ain't" cancels the negative in the "never." But that this isn't the point, grammatically, of the multiple negatives can be shown by the statement in that same dialect, "Ain't nobody never told me nothing about that," which means the same thing, but now logically would have to mean something like, "Somebody once told me something about that." In Black English, piling up negatives simply emphasizes the negativeness of the sentence; double negatives do not cancel each other.(1)
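
The contrast between the two grammars can be put almost arithmetically. Here is a toy sketch in Python (my own caricature, of course, not a serious piece of linguistics): the standard reading cancels negatives in pairs, while the emphatic reading treats any number of negatives as a single negation.

    # Standard reading: negatives cancel in pairs (parity of the count).
    def standard_reading(negatives, claim):
        return claim if negatives % 2 == 0 else not claim

    # Dialect reading: one or more negatives is one emphatic negation.
    def dialect_reading(negatives, claim):
        return (not claim) if negatives >= 1 else claim

    told_me = True   # the bare claim: "somebody told me about that"

    # "Ain't nobody never told me about that": three negatives; an odd
    # count, so even the standard reading happens to come out negative.
    assert standard_reading(3, told_me) is False
    assert dialect_reading(3, told_me) is False

    # "Ain't nobody never told me nothing about that": four negatives.
    # The standard reading flips positive ("somebody once told me
    # something"); the dialect reading stays emphatically negative.
    assert standard_reading(4, told_me) is True
    assert dialect_reading(4, told_me) is False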

Style is at least in part an esthetic logic, dealing with how the words sound and/or look on the page, how long phrases and sentences should be to hold attention and lead from one sentence to the next, how to avoid having words call attention to themselves instead of what they represent, how the sound even of written words (as they are "heard" by the reader) is to be kept from getting in the way of the judgments conveyed, and so on. For instance, in a book like this, for readers who are sophisticated and intelligent, long sentences like the previous one are, I think, in order, as long as they are broken up by commas, semicolons, and dashes in such a way that the ideas can be at once recognizable and flow into each other showing the very large whole that they are parts of. Whether I am successful in this, I will leave to you. The point is that the style is not the same as what is usually called the logic of what is written. Kant's style is notoriously bad; but his books are logically arranged.

Then what is it that is called the "logic" in what is being said? Let me put it in the form of a definition:

Formal logic is the arrangement of statements in such a way that it is understood that the final statement cannot be denied without contradicting what has already been said.

That is, the logic of a group of sentences is the way they back you into a corner by means of the Principle of Contradiction, so that if you agree that what is being said is true, then you have to admit that the final statement is also true, or you have contradicted yourself in one way or another.

It sounds, therefore, as if logic deals with truth. But this is not the case, actually. It deals only with the way statements are arranged, not the truth of the statements, and with the particular trick connected with the fact that some arrangements of statements demand a particular statement under penalty of contradicting themselves. Of course, if the statements are true, and if the arrangement is of the logical type, then (he said, using a logical inference) the final statement--the "conclusion"--not only is true but cannot be false.

And this connection logic has with the truth of the statements is, of course, why we use it. But it must be understood that the logic itself doesn't deal with this. There can be logical inferences (operations) that generate false conclusions and are perfectly valid, and illogical fallacies that generate true conclusions. For instance, "Every German shepherd is a dog, and every dog is an insect, and therefore every German shepherd is an insect" is valid, but its conclusion is false; while "Every German shepherd by nature has four legs and every dog by nature has four legs and therefore every German shepherd is a dog" has three true statements in it, but is invalid--as can be seen by replacing "German shepherd" with "Arabian stallion."
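
This independence of validity from truth can even be checked mechanically. Here is a sketch in Python (my own illustration, on the assumption that "every X is a Y" can be modeled as set inclusion): a form is valid when no assignment of sets whatever makes the premises hold while the conclusion fails.

    from itertools import product

    def powerset(universe):
        xs = list(universe)
        for mask in range(2 ** len(xs)):
            yield frozenset(x for i, x in enumerate(xs) if mask & (1 << i))

    def has_counterexample(form, universe=range(3)):
        # Search all small set-assignments for one that affirms the
        # premises while denying the conclusion.
        subsets = list(powerset(universe))
        for a, b, c in product(subsets, repeat=3):
            premises, conclusion = form(a, b, c)
            if premises and not conclusion:
                return True
        return False

    # Valid form: every A is a B, every B is a C; so every A is a C.
    def valid_form(a, b, c):
        return (a <= b and b <= c, a <= c)

    # Invalid form: every A is an F, every B is an F; so every A is a B.
    def invalid_form(a, b, f):
        return (a <= f and b <= f, a <= b)

    assert not has_counterexample(valid_form)    # no way to break it
    assert has_counterexample(invalid_form)      # the "Arabian stallion"

The first form cannot be broken even when a premise ("every dog is an insect") is factually false; the second can be broken even though all three of its statements happen to be true.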

This rather tenuous connection with truth has caused a lot of confusion in logical theory, particularly in modern times, where inferences are checked by "truth-tables," as if the actual truth mattered in the logic of what is going on. I think that instead of T's and F's in these truth-tables, the letters should be A's (for "Affirm") and D's (for "Deny") to reflect more accurately what is going on. Let me define a number of terms here, to avoid clutter:

A proposition is a statement of fact "proposed for the sake of the argument" in a logical inference.

An affirmation is the acceptance of the proposition.

A denial is the rejection of the proposition.

That is, affirmation accepts the proposition as "true for the sake of the argument," and not necessarily factually true. Thus, in the inference, "If it is raining out then the cat is inside, and the cat is not inside, therefore it is not raining out," the first proposition might be affirmed, even though it is recognized as not always a statement of the way things actually are.

Note that when negative propositions are affirmed, they are accepted as they stand. That is, if in the inference above you think that the cat is not inside, then you affirm the second proposition (you accept that the cat is not inside). Of course, the point of the inference is that if you affirm both of the first two propositions, then you can't deny the third one without contradicting yourself. So some more terms are in order:

An argument is a logical inference.

An inference is an arrangement of propositions such that the conclusion cannot be denied without either denying the premises or declaring the logic invalid.

An inference is valid if the conclusion cannot be denied without denying at least one of the premises.

An inference is invalid if the conclusion can be denied without denying any premise.

A premise is a proposition from which a conclusion is drawn.

A conclusion is a proposition whose affirmation or denial depends on an inference. The conclusion is said to "follow from" the premises.

Implication is the relation of premises to the conclusion. Premises are said to "imply" the conclusion.

As the inference from the cat's behavior to what the weather is shows, logic may or may not have anything to do with the way the world actually works, depending on the actual truth of the propositions. But this does not make logic an idle game, because the world works consistently with the Principle of Contradiction, as we saw in Chapter 7 of Section 1 of the first part 1.1.7; and so, as I said earlier, if the premises are in fact true and if the inference is valid, then the conclusion must in fact be true. But establishing the truth of the premises is outside logic; and this is why within logic we only deal with propositions and affirmation and denial, and not with statements and their truth and falsity.
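
To show what I mean by an affirm/deny table, here is a sketch in Python (my own notation; the "A" and "D" labels are the suggestion I made above, not anything standard) of the cat-and-rain inference. Its validity consists in the fact that no row affirms both premises while denying the conclusion--and nothing about actual cats or actual weather enters into it.

    from itertools import product

    def affirm_deny_table():
        # Rows of (premise 1, premise 2, conclusion), each marked
        # 'A' (affirm) or 'D' (deny), over every way things could stand.
        rows = []
        for raining, cat_inside in product([True, False], repeat=2):
            p1 = (not raining) or cat_inside  # "if raining, cat inside"
            p2 = not cat_inside               # "the cat is not inside"
            c = not raining                   # "it is not raining"
            mark = lambda v: 'A' if v else 'D'
            rows.append((mark(p1), mark(p2), mark(c)))
        return rows

    # Valid: there is no row (A, A, D) -- you cannot deny the conclusion
    # without denying at least one premise.
    assert all(row != ('A', 'A', 'D') for row in affirm_deny_table())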

There are those who say that logic is only a game, but for a different reason. Insofar as logic draws a conclusion from premises that imply it, they reason (using logic, by the way), the implication is already known before the conclusion is drawn; and therefore, drawing the conclusion is otiose, and no new knowledge has been gained by it. Presumably they say this to convince people who think that logic does lead to new knowledge that they are wrong. But if so, then why would they offer that inference? If it is valid and the premises are true, then the people they are trying to convince already know that logic gets you nowhere, and they haven't told them anything new. On the other hand, if they are expecting to have their hearers say, "Oh! I didn't realize that!" then this new insight on the part of the hearers implies that the inference is invalid or one of their premises is wrong (because something new was learned, which is impossible--on this view--if the inference is valid and the premises are true).

So I think we can safely say that there's something faulty about their position. As generally presented, it rests on the erroneous assumption (that Hume is largely to blame for, though he didn't originate it) that you can't know the truth of general propositions ("All dogs have four legs") unless you have checked all the dogs there are to see that each of them does in fact have four legs. Obviously, if you've done that, and then you say that German shepherds are dogs, your "conclusion" that therefore German shepherds have four legs is indeed a waste of time, because you've already checked all the German shepherds in getting your original premise.

But we don't get general statements in this way. For instance, on being presented with a three-legged dog, a person doesn't say, "Look at that! So not all dogs have four legs," but says, "How did that dog's leg get cut off?" That is, "All dogs have four legs" is a different kind of statement from the statement, "All the living beings in this room are human," which is understood to be false if one discovers an ant on the floor. The first is what Arthur Pap (following Nelson Goodman and Roderick Chisholm) calls a "lawlike generalization," which supports "counterfactual inferences"--or in other words, which people still accept as true in spite of instances to the contrary. The latter is not "lawlike," and is falsified if any instance contrary to it is found.

Lawlike generalizations are not in fact made by checking every instance of what they talk about. How we can make them is the problem of induction, which I will discuss later in the section on science; but on the assumption that we do in fact make general statements from incomplete observation, then obviously conclusions drawn from these general statements are not necessarily already known to be true.

And it would seem obvious that, from seeing one relationship, it does not follow that you explicitly understand all the relationships that are tied to it somehow. And it is quite possible that by rearranging words in various propositions, a new relationship among the words (and among the objects they refer to) is discovered that wasn't understood as such before. So logic can lead to new knowledge.
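
Here is a trivial sketch in Python (again my own illustration) of how the rearrangement produces a statement that was in neither premise taken singly: chaining two "every X is a Y" propositions yields a relation that no one had stated or checked directly, though it cannot be denied without denying a premise.

    # Each pair means "every <first> is a <second>."
    premises = {("german_shepherd", "dog"), ("dog", "four_legged_by_nature")}

    def new_conclusions(facts):
        # Close the facts under transitivity; return only what is new.
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for (a, b) in list(derived):
                for (c, d) in list(derived):
                    if b == c and (a, d) not in derived:
                        derived.add((a, d))
                        changed = True
        return derived - facts

    # The conclusion appears in neither premise; it is new as a statement,
    # yet it cannot be denied without denying a premise.
    assert new_conclusions(premises) == {("german_shepherd",
                                          "four_legged_by_nature")}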

Of course, a meaningless "proposition" can't be true or false. This is because, as I mentioned in Chapter 5 of Section 3 of the third part 3.3.5, the meaning of a sentence is the conscious act it stands for; and so a statement's meaning is the judgment it stands for. But if it is meaningless, it can't represent a judgment, and it is only through the judgment that a statement can be true or false (even though its truth or falsity does not depend, as I said, on whether the judgment is or is not mistaken). Any statement that contradicts itself is meaningless, because it can't represent a judgment, as we saw in Chapter 7 of Section 1 of the first part 1.1.7. Note that it is not meaningless if it contradicts some known fact; in that case, it is false.

For instance, "The statement I am now writing is false" can't be a statement, because if it is false it is true (because it says it is false, and that would then make it true), and if it is true it is false (because it says it is false). I mentioned under in discussing the principles of identity and the excluded middle in Section 1 of the first part that this was a complicated problem, but that this locution couldn't be a statement. Basically, it can't be one because the judgment it would represent would be the recognition of being mistaken because one is not mistaken; and this is impossible for a self-transparent act. But there is more to it than just this.

Those who want to bypass judgments altogether and go directly from statements to facts find it difficult to deal with the distinction between falseness and meaninglessness. Remember, the truth or falseness of a statement does not depend on the judgment it represents, but on whether it expresses a fact or not; but the meaningfulness of a statement (not surprisingly, given what meaning is) depends on whether it can express a judgment. But if you don't hold this distinction, then since some apparent statements are manifestly meaningless (what could be the meaning of "The cold door sneezed a purple eyeball"?) and not false, then you have to resort to saying that they are not "well formed."

For instance, Bertrand Russell first tried to solve the problem of "This statement is false" by giving the rule that a statement cannot meaningfully refer to itself. From this it follows that the statement "The statement I am now writing is in English" is meaningless--and so presumably could not be understood by anyone. But that is silly. I can even envision a context for it. I could make a list of different languages by writing things like, "Esta frase está en castellano," "Cette phrase est en français," "This sentence is in English," and so on, and you could figure out what each of them meant by looking at the ones you knew.

But then, as others have pointed out, what do you do with this pair: "The following statement is true. The preceding statement was false."? The rule was then changed to say that a statement that talks about another is in a meta-language, and it can only refer meaningfully to the language below it (not to itself or to a meta-language referring to it). Obviously, in the conundrum before us, the first statement is in a meta-language referring to the second. But the second statement's referring to the first puts it in a meta-language with respect to the first, and so it is in a higher-order meta-language, and so the combination is meaningless. But again, suppose the second statement said, "The preceding statement was in English." Is the combination now unintelligible?

Granted, the combination dealing with truth and falsity is unintelligible. The question is why. I think that making rules about meta-languages and not being able to talk about a meta-language at or above the one one is using is an ad hoc solution that by decree makes a whole series of perfectly intelligible statements "meaningless non-statements."

The meaninglessness of the two statements dealing with their mutual truth comes from the fact that the combination (as can be seen from the one dealing with English) is understood in one judgment; but the judgment it would represent is again, "I am not thinking of what I am now thinking of," and such a judgment can't be made, since the judgment is self-transparent.

But then, is the statement, "This statement is true" meaningful? There doesn't seem to be any problem if it is part of a larger statement that has some content, like St. Paul's, "...and I stayed with [the Rock, Peter] two weeks, without seeing any other Representative except James, the Master's relative. This is no lie I am writing to you. Before God it is not." Clearly, he is saying that the part of the statement dealing with his staying with The Rock is not a lie.

But if you take, "This is no lie I am writing to you" absolutely, with no context, could it express a judgment? The question then is what judgment it would be expressing. It is the equivalent of, "This statement does in fact express my judgment"; but this shows that the judgment it expresses is itself the judgment of the fact that the statement expresses it. But that fact doesn't exist until after the judgment is made, and so the judgment couldn't take in that fact until after the statement was expressed. But the statement is not self-transparent or atemporal, and so comes after the judgment as something distinct, which means that the judgment couldn't be made as to its factuality (truth) before it was actually stated.

You don't have this problem with "This statement is in English," because that statement can be true or false irrespective of the judgment of the person making it. Suppose someone, for instance, said, "Esta frase no está en castellano," not realizing that "castellano" is the Spaniards' normal way of speaking of Spanish. What the speaker meant was "This sentence is not in Castilian, it is in Spanish," but he misunderstood the words. Hence, his statement is false. As the person is making the statement, he is judging what the statement says.

This is only slightly different from judging the statement's truth while you are making it; but the difference is day and night as far as the meaningfulness of the two is concerned. The fact understood in the case of the language is a fact about the statement itself, while the "fact" understood about the truth of the statement is its supposed relation to the judgment that it expresses. But that judgment couldn't, as I said, be made prior to the "statement." Note that this applies not only to single "statements" but to combinations that refer back to themselves, like "The next statement is true. The preceding statement was true." Here, the truth of the "next" statement is known only after it is made; but since it says that the preceding statement was true, it could not be made until after the preceding statement was known to be true--or in other words, after what was known after it.

"The next statement is true" (or false) can be meaningful when a person uttering it knows what he is going to say next. And the next statement can refer back to the one now being uttered, as long as it does not refer to its truth. That is the unique case in which what is known before would have to be known afterwards and not before.

Where, then, are we? I think we can clarify Russell's "rule" with something that is not arbitrary. A statement can refer to itself meaningfully except when it is referring to its truth or falsity. When it refers to its truth or falsity, then the fact it refers to as true or false is the fact that it is expressing a prior judgment about itself. But that judgment cannot be made except subsequently to the statement. Given that the meaningfulness of a statement is that it is the expression of a judgment, this contradiction precludes statements from being meaningful if their "meaning" is supposed to be their own truth or falsity.

Conclusion 1: A statement cannot be meaningful and refer, either directly or indirectly, to its own truth or falsity.

This is a conclusion, not a "rule," because I have shown how such a statement contradicts itself; and what contradicts itself cannot be either true or false.


Notes

1. In case this is construed to be an argument for teaching Black English to Blacks, I want to point out that it argues in the opposite direction. Blacks already know how to speak their dialect; but unless they are taught Standard English, they won't realize that it expresses itself differently; and therefore what they say in their dialect sometimes means in Standard English exactly the opposite of what they are saying--which, of course, is going to make it difficult to communicate with people who speak Standard English.

This has nothing to do with whether Standard English is "right" and Black English or other dialects are "wrong"; it is a question of whether you want to communicate with others or not. If you are a member of a subgroup of the larger society, you have no grounds for expecting the society as a whole to defer to you when you speak to them; and so you have to learn the standard way of expressing yourself in that society. The Québécois in Canada have the same problem, and have tried to "solve" it by demanding that the country be bilingual; but I write this shortly after the "Meech Lake" amendment to the Canadian constitution (recognizing Québec as a distinct society) was defeated; and it looks at the moment as if the country is going to split up over the issue. Certainly French is a legitimate language; but whether a minority in an English-speaking country can demand that they keep their French as they intermingle with English-speakers is what the issue really is. And they rightly see this as giving them the status of a separate society within the country.



Chapter 3

Propositions and their parts

The whole discussion about the truth and meaningfulness of statements was necessary, because even though logic doesn't care whether a proposition is true or not, it must be a statement, and so it must be either true or false. And meaningless "statements" are just not statements, because they can't express a judgment.

But because propositions are statements to be arranged in such a way that they generate other statements, then it would not be surprising to find that propositions were placed in a stylized form that made them easy to manipulate.

Logical form is the form into which a statement is cast to make it a proposition easily operated on in logic.

In logical form, a proposition has three parts:

The subject of the proposition is the term that refers to a class of objects.

The predicate of the proposition is the term that expresses the proposition's meaning.

The copula is the present indicative active of "to be" used as a "link" between the subject and the predicate.

A term is a word or group of words which functions grammatically as a noun.

Note carefully that though what I am calling the "subject" and the "predicate" of a proposition are terms, they have different definitions from what I will later call the "subject-term" and the "predicate-term" of a categorical syllogism (which are the terms that form the subject and predicate of its conclusion, though they may not be so in the premises). I want to mention this early to minimize confusion as much as possible.

Let us look, then, at the difference between a proposition and a statement. A statement, first of all, has only two parts, not three; if there is a verb "to be" in it, this is part of the predicate, not a "link" between subject and predicate. In a statement, as in a proposition, the subject is the word-group that calls to the mind of the hearer the object(s) he is supposed to be seeing a relationship among or within; but in the predicate of a statement, a verb is always included, indicating the act of this subject--in its relation to itself or to other acts. Thus, "John talked for two hours" refers to John and means the act of talking (i.e. he did what other people do when they make articulate sounds, and he did it for this length of time). So the hearer understands the fact that John performed this act, which he understands as in the past, as a certain kind of act, as lasting that long.

Statements often cannot be taken over into logic as they stand, because stylistic considerations often cast them in a form which is clear enough, but which does not easily reveal how they go together to generate conclusions. Even when the progression of statements is obviously logical, the word-groups get transformed in the process (repeating phrases exactly is generally bad English style), and the logic is connected with the progression of meanings rather than staring at you out of the words.

For instance, if a person said, "I can't stand John; he talked for two hours, and I hate people who do things like that," it is perfectly clear from the meaning that the first statement is a conclusion based on the combination of the last two. But "John" gets changed to "he" and "talked for two hours" to "do things like that." On the other hand, if the speaker says, "John is a long-winded person, and a long-winded person is a person I can't stand; and so John is a person I can't stand," it becomes perfectly clear that it would be self-contradictory for him to say that he "stands" John.

Logic, then, has two functions: (a) to allow the easy manipulation of statements so that new statements can be generated, and (b) to reveal how the statements are being manipulated so as to test whether the logic is valid or not.

Notice that if the propositions above were arranged, "John is a long-winded person, and a person I can't stand is a long-winded person," the proposition, "John is a person I can't stand" doesn't follow, any more than it follows from "Horses have four legs and dogs have four legs" that horses are dogs. We will see why this is so later; the point I am making here is that the statements in their normal form can mask fallacies like this; and that is why propositions have a special form.

As can be seen from the valid inference above, the term "a person I can't stand" is used as the predicate of one proposition and the subject of another; and it is this manipulation of subjects and predicates of propositions that Aristotle gave to the world as the "categorical syllogism." But we will, as I said, see this later. Our focus right now is on what the parts of a proposition are.

First of all, a term can be a single word, as long as it is a noun or pronoun (which functions grammatically as a noun); or it can be any group of words that performs the same function: a phrase, a clause, a complex of linked clauses, or what have you. Something like "he" would be a term, as would "John"; and so would, "The Queen of England" or "The red-haired man who is stooping down to pick up the package of Tums he just dropped out of his shopping bag."

There is one thing to note carefully, especially if the term is a single word:

Rule: The same word can be different terms, depending on the class of objects it refers to in the context of its use.

For instance, a "pen" that you write with would be a different term from a "pen" that you keep pigs in. These are traditionally called "equivocal terms," but in my terminology, they are equivocal words, and are simply not the same term in any sense. The case is similar with analogous words; they are not analogous terms, but different terms, even though the analogous words have a common core of meaning. Thus, it is illogical to say, "My complexion is healthy, and what is healthy eats well, and so my complexion eats well," because "healthy" in the first sense means "a sign of health" and so refers implicitly to a different set of objects from healthy living bodies. Similarly, if words are taken in two different suppositions, as we saw in Chapter 4 of Section 3 of the third part 3.3.4, they are different terms; and this is why it is illogical to say, "Clint Eastwood is a star and star is a four-letter word; and so Clint Eastwood is a four-letter word."

Now I said that a term refers to a class of objects, which seems to imply that the word-groups above would not be terms, because they only refer to one object. But first of all, individual references aren't terribly productive in logic, and so aren't much used; but since they can be, the convention is to regard them as classes that have only one member in them--and so when you talk about John, you are talking about the whole class of John--that is, not all the people named "John," but the "class" which consists of this individual John; because it turns out that individual objects function logically in the same way as classes taken as a whole, as we will see shortly.

Terms, as nouns, have two possible functions: (a) they can refer to a class of objects, or (b) they can express what the relationship is among the objects in the class. The first I will call the reference-function of the term (its traditional name is the term's "denotation"); and the second the meaning-function (traditionally designated as its "connotation"). For instance, in "Every human being is mortal," the term "human being" is being used in its reference-function, to call up the class into your consciousness, and what is being said about it is the fact that it eventually dies. But in "Every being that can laugh is a human being," the term is being used in its meaning-function so that you understand that the beings that can laugh have also the relationship that defines what human beings are. In other words, as I said in Chapter 5 of Section 3 of the third part 3.3.5, the subject uses the word to bring up an image, and the predicate to bring up a concept to the hearer's mind. This is also--not surprisingly--true in logic; but there are oddities about logical form, because we want to be able to use the term (in different propositions) in its two different functions.

Many contemporary logicians think that the meaning-function of a term is adjectival, because it expresses a "quality" or "property" the subject has; and, of course, this is very often the case (the common quality whose possession relates by similarity all the objects in the class). But first of all, there are other relationships besides similarity, and secondly, predicate adjectives only "mean" and can't grammatically be used in a reference-function (You can't say, "Every blue is ..." unless you are thinking of it as a noun: "every blue thing is ..."). Hence, to exploit the double functions of terms, we want them always to be nouns.

And precisely because the term, even when used in its meaning-function, also has (in itself) a reference-function, which we might want to exploit, logic tends to concentrate on the reference-function. This is the reason why I said that terms are different depending on the class of objects they refer to, rather than that they are different depending on whether they have the same meaning. But because terms are sometimes used in propositions in their meaning-function, then (as we will see) it becomes tricky to say what objects they would be referring to at that moment if they were referring to objects rather than expressing meaning. But to discuss this, we have to discuss subjects and predicates.

Now as I said, the subject is the term used in its reference-function, to point to the class of objects in question.

But in this pointing, you may or may not be pointing to the whole class, and so we have to make a distinction here. Strictly speaking, when you are referring to the "whole class," you are not referring to the class as such, but to every member of the class; and when you are referring to "part of it," you are not referring to a sector of it as if it were a pie you cut up, but to a number of the members of it. Logicians talk about the "extension" of the term (or its "quantity") and refer to the "distributed" or "undistributed" use of the terms; and call the "distributed" use (dealing with every member) the "universal term" (or the term used "universally") and the "undistributed" use the "particular" term (or the term used "particularly").

But it turns out that this only leads to confusion, because ordinary language uses "particular" to mean "individual" (as in "that particular man"), and the logical meaning of "particular" is that the reference is to some indefinite number of members of the class. "That particular man," meaning "that individual," would then be a universal term, as would "All of the students except these twenty-five."

So I think that the traditional terminology is misleading. If you know logic, what you realize is that references to definite individuals behave differently from indefinite references to individuals; and so let us call them by the names that show what we're talking about.

A definite term is a term in which the objects referred to can in principle be designated.

The word every is the primary sign of a definite term.

An indefinite term is a term in which the objects are known only in relation to the class they belong to.

The phrase at least one is the primary sign of an indefinite term.

I chose "every" instead of the traditional "all" for the definite reference (the "universal" one), because "all" can refer to the class as a whole (collectively) and not each member of it (distributively), as in "All the students weighed exactly one ton," which certainly means something different from "Every student weighed exactly one ton." Further, there is an ambiguity in definite negative statements using "all." For instance, does "All children are not mature" mean "Not all children are mature" (i.e. some are not mature) or "No children are mature" (not one is mature)? Grammatically, it should mean the former, since the proper form for stating a definite negative is "No X's are Y's," while "Not all X's are Y's" simply denies the "universality" of the subject's reference, and "All X's are not Y's" is ambiguous. But "Every X is not a Y" is clear, as is "Not every X is a Y"; and these are the only forms possible using "every."

I chose "at least one" here instead of the traditional "some" because in ordinary language, "some" means "some are and some aren't," whereas the indefinite reference doesn't necessarily exclude the possibility that you might be referring to every member of the class; it's just that you don't know whether you are or not; and "at least one" has this connotation of not excluding "every." Hence, "At least one palomino is a pony" is obviously true, while it is not intuitively obvious that "Some palominos are ponies" is true if you happen to know that there is no instance of a palomino that isn't a pony.

Further, if you say, "At least one X is a Y," then you're grammatically in the same form as "Every X is a Y"; whereas if you say "Some X's are Y's," you're using the plural, and it's not as easy to leave the terms unchanged as you manipulate the propositions--and logic becomes clearer the less you modify the terms themselves.

Rule: The subject of a proposition must be preceded by "every" or "at least one" to indicate whether it is a definite or indefinite term.

The proposition is called definite or indefinite depending on whether its subject is definite or indefinite.

So the statement "John spoke for two hours" becomes "Every John is something that spoke for two hours." Since John is one definite person, the reference is definite (and so the proposition is a definite proposition); and this means that the definite reference "every" must precede it. It makes it clear that John is now a class, and that every member of the class (the one member, John) is what spoke for two hours.

You are going to lose some information in this transformation of statements into propositions; the point is that you won't lose any logically relevant information; and once you have done your logic, then you can always "substitute back" if you do it carefully enough and get back a statement that looks more like standard English. For instance "All of the students but one" translates into logical form as "At least one student," because the words don't indicate which student was left out, and so you can't "point to" the ones that are left. "All the students but this one," however, is definite, and would probably translate simply into "Every student" if it were clear that what "student" now referred to was the subgroup.

Logic's purpose, as I said, is to make manipulation easy; and so the transformation process should not obfuscate more than is necessary. If it is clear what is being referred to, then let it stand; if not, add as few words as possible to the term to make it clear. For instance, if there might be a confusion between the students in the class above, then you could say, "Every sub-student" or something, defining "sub-students" as "The students in the class minus John."

In order to do the transforming, you have to know what sorts of words in ordinary English indicate definite and which indefinite references. Since this chapter is about logic rather than being a textbook on logic, I will simply mention that definite references are words like the following: this, that, these, those, the, all, any, every, each--and a, when it means "any example of," as in "A horse is an animal." Indefinite references are indicated by the following: at least one, some, one, ten (or any number without "these"), many, part of, a few, all but one (or any number without "these")--and a, when it means "some unspecified one of," as in "A man spoke to me." Note also that "not every," as in "Not every dog has fleas," is an indefinite term; it is not the logical equivalent of "Every dog does not have fleas."
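Since this is in effect a small lookup procedure, it can be put into code. Here is a rough sketch in Python (the language I will use for all such illustrations); the word lists are just the ones above, "a" is deliberately left out as needing context, and none of this is part of the logic itself:

    # A rough sketch of the word lists above; "a" is omitted because it can
    # signal either kind of reference depending on context, as noted.
    DEFINITE_SIGNS = {"this", "that", "these", "those", "the", "all", "any",
                      "every", "each"}
    INDEFINITE_SIGNS = {"at least one", "some", "one", "many", "part of",
                        "a few", "not every"}

    def reference_kind(sign: str) -> str:
        """Classify a quantifying word or phrase as definite or indefinite."""
        sign = sign.strip().lower()
        if sign in DEFINITE_SIGNS:
            return "definite"
        if sign in INDEFINITE_SIGNS:
            return "indefinite"
        return "context needed"   # e.g. "a", bare numbers, "all but one"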

I suppose I have to say a few words here about contemporary logic and references. Bertrand Russell says, I think, that a statement like "The present king of France is bald" is meaningless, because if "the present king of France" is to be taken as a substitute for a proper name (such as Louis XVIII), then it is a term that refers but has no referent (because there is no king of France at present).

What I gather Russell's position is based on is a kind of naive realism where an "object" referred to is not a being but something like a "shaped patch of color" or in other words whatever "out there" (the set of energies) produced the percept.

Subjects of statements, therefore, can only be proper names, because they merely point and so there is no understanding connected with them at all. And since, I think he is saying, they point, then obviously (according to Russell) they have to have something to point to, or something that directly affects some sense organ. Everything else is actually a predicate. Hence, all objects allegedly pointed to by common nouns, such as "the human being over there," are actually implicit statements, of the form "There is an X such that X is a human being and X is over there," where X is simply a "place-holder" and "human being" and "over there" are predicates describing X.

For him, references to every member of a class of objects have to have this form, because obviously you can't "point" in any meaningful sense to every single human being, say, or every dog. Hence, for contemporary logic, the proposition "Every human being is something mortal" becomes a hypothetical inference: "(For any X) if X is a human being, then X is a mortal thing."

Presumably, you are simply declaring this to be true, because I don't see how you could assert it unless you had actually checked out, not only every single human being, but absolutely every object in the universe, to see that all of them that were human beings also had the property of mortality. If this is true, then the little "place holder" (for any X) is just terminologically different from (for every X), and this has to be the equivalent of "every single thing there is or could be."

The interesting thing here is that contemporary logic's view of the inference in question allows it to be valid when the "if" sub-proposition is false; and so it is true (they say) that every unicorn has three legs, because there aren't any unicorns. That is, "(for any X), if X is a unicorn, then X has three legs" is true for these people because there aren't any X's that are unicorns, which means that "X is a unicorn" is false, making the inference valid (and so the statement true).

Note that if you accept this way of looking at things, then it doesn't follow that if every human being is mortal, at least one human being is mortal. Why? Because the indefinite proposition can't "point to" an empty set of objects (since it says "at least one" is something or other), while the definite proposition can, on this theory (because as hypothetical it doesn't point, but only links two predicates). For example, it doesn't follow from "Every unicorn has three legs" on this view that "At least one unicorn has three legs," because "every unicorn has three legs" is true given that there aren't any unicorns at all, while "at least one unicorn has three legs" isn't true because there aren't any unicorns.
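In fact, this contemporary reading is easy to mimic mechanically. A minimal sketch in Python (the three-legs test is of course made up for the illustration) shows the "universal" proposition coming out vacuously true of an empty class while the "particular" one comes out false:

    unicorns = []   # there aren't any unicorns

    def has_three_legs(u) -> bool:
        return u.legs == 3

    # "Every unicorn has three legs," read as a hypothetical inference:
    print(all(has_three_legs(u) for u in unicorns))   # True (vacuously)
    # "At least one unicorn has three legs":
    print(any(has_three_legs(u) for u in unicorns))   # False (nothing to point to)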

But that's silly.

I will get to the business of the validity of a hypothetical inference later; but the reason why definite propositions ("universal" ones) are turned into hypothetical inferences is basically the same as Russell's claim above, that any reference can't do anything but point and therefore must be a pure demonstrative (like "this" or "that" or some proper name) without any content that could be understood, since that would make it a predicate.

But this would make it impossible to say what you are talking about when using a common noun like "human being," because as having content it merely means the "quality" of humanity, and doesn't point to a class of objects. If you are pointing, you'd say, "That's a human being, and that's a human being, and that's a human being..."; but then if you say "the human beings" you're not re-pointing at the objects you pointed out, you're saying "These exist, and they are human beings."

Now granted, you can transform any subject of a statement (if it isn't a pure demonstrative) into the predicate of a different statement that has a pure demonstrative for its subject (or into a hypothetical inference); but that's a far cry from saying that that's what subjects of statements "really are," and that common nouns merely express concepts and don't also link up to a generalized image and so point to a class of objects.

That is, when you hear, "Every human being is mortal," you don't go through a little process in your brain searching through its files and first picking out all the X's that you understand as human beings and giving each of them the additional quality of being mortal. When you hear the statement, the word "human being" simultaneously links up to the generalized image you have of a human being (that set of nerves that is activated when you see human beings, which could be conjured up sensitively as a generalized image), and recalls the concept. Since in this case the concept is not relevant, you ignore it and understand that everything referred to by this generalized image has also the relationship of mortality (which is the concept you understand in this judgment). To put it another way, you don't understand anything at all about the meaning of humanity in this proposition; the only meaning it has is that of mortality.

If Russell's view is true, then when a woman says, "Those three people stole my purse!" she is really making the compound statement, "Those are three and they are people and they stole my purse." Even if she might be trying to convey that there are three of them, clearly she has no interest whatever in making the policeman understand that they are people, and anyone who claims that this is what she is really saying simply doesn't know how we use language. For a woman in this situation, "Those three people stole my purse" is exactly equivalent to "Those three stole my purse," or even "Them! They stole my purse!" The only function of "three people" here is to make the pointing more accurate--to single out the objects pointed to verbally from the background information that is also coming into the policeman's eye.

I rest my case.

Let me therefore make the following rule:

Rule: For logical purposes, it is to be assumed that classes referred to are not empty.

That is, if you are using a term in its reference-function, then for logical purposes it refers to something; and when it is used in its meaning-function it potentially refers to something for logical purposes. The only way you could know whether a term (like "unicorn") actually referred to something or not would be to check your experience; and this is extra-logical verification, something that we are precisely trying to avoid by using propositions instead of statements. Propositions are only "proposed as true for the argument," not stated as true in fact. The only verification we are interested in, in logic, is that connected with the "verification" of the conclusion based on the validity of the logic; and even here, whether the conclusion is factually true (i.e. true as a statement) is irrelevant; all that matters is whether it follows from the premises (i.e. whether it must be true if they are statements of fact). Remember, in logic, we affirm and deny, we don't "recognize the truth" of something (even though, in using logic, we recognize the truth of the conclusion based on the knowledge that the logic is valid and the premises are true statements).

Hence, the proposition, "Every unicorn is something that has four legs" is not "taken to be true" (if you affirm it) because it's a fancy hypothetical inference, but because if you're going to be talking about unicorns, the rule above says that you're talking about something. But by the same token, the proposition "At least one unicorn has four legs" need not be denied (and in fact cannot, as we will see, be denied if the definite proposition is affirmed).

So much, then, for the subject of the proposition. The logical form of the copula is simple: it can be either affirmative or negative. Hence, there are only a few forms that the copula can take: am (am not), are (are not), is (is not).

The proposition is affirmative or negative depending on whether the copula is affirmative or negative.

That should be obvious. But note that a definite negative statement can look like an affirmative proposition with a negative subject: "No horse is a dog" is a statement that translates into the proposition "Every horse is not a dog." I mentioned this, if you will recall, when I was justifying my choice of "every" and "at least one" as the reference-indicators (the "quantifiers," in traditional terminology).

This points up the fact that negatives can appear all through the proposition--in the subject, in the copula, and in the predicate; but the proposition is negative only if the copula is negative, irrespective of the negativeness of either the subject or predicate or both. For example, "At least one non-horse is a non-dog" is an affirmative proposition, as would be its expansion, "At least one thing that is not a horse is something that is not a dog." Even though "not" appears here, the "not's" are in clauses forming parts of the subject and predicate respectively, and do not affect the copula.

Tense is not included in the copula; the present is actually an "aorist," or timeless use of the verb. Any tense in the statement has to be translated into a clause or phrase in the predicate.

The predicate of the proposition must be recast as a noun so that it can be used as the subject of a different proposition. Traditionally, the predicate could be a noun or an adjective; but adjectives cannot be used as they stand as subjects of other propositions, and so they should be ruled out as predicates. Thus, "the cars are all red" translates into "every car is something red." For instance, as we will see shortly, from "every car is something red" you can get "At least one red thing is a car"; but "Red is a car" obviously doesn't follow. (Note, by the way, that "every car" in the proposition refers to the definite class of cars referred to in the "the cars" of the sentence.)
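At this point all the pieces of logical form are on the table: a subject with its reference, a copula with its quality, and a predicate recast as a noun. So, purely as an illustration (a sketch the logic in no way depends on), the structure can be written down in a few lines of Python:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Proposition:
        subject: str       # the term used in its reference-function
        affirmative: bool  # the "quality" of the copula
        predicate: str     # the term, recast as a noun, expressing the meaning
        definite: bool     # True for "every," False for "at least one"

        def __str__(self) -> str:
            quantifier = "Every" if self.definite else "At least one"
            copula = "is" if self.affirmative else "is not"
            return f"{quantifier} {self.subject} {copula} {self.predicate}"

    print(Proposition("car", True, "something red", True))
    # Every car is something red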

It will not necessarily be obvious what words in a given sentence are to be translated as the subject of the proposition and what is to be included as the predicate; this will depend on what you think the sentence means.

For instance, "Fourscore and seven years ago, our fathers brought forth upon this continent a new nation" can be translated variously depending on whether you think Lincoln was talking about the fathers (and meaning what they did) or what the fathers did, or what they "brought forth" or when they did it. Thus, depending on your interpretation of the statement, the proposition might be "At least one of our fathers is something that brought forth...," or "Every thing [i.e. that definite thing] our fathers did fourscore... is an act of bringing forth...," or "Every thing our fathers brought forth fourscore and seven years ago upon... is a new nation," or "Every fourscore and seven years ago [that definite time] is the time when our fathers..." By my reading of the speech, what Lincoln was driving at was the third meaning; but in different contexts, the others might also be legitimate renditions of the statement. The point is that it is not cut-and-dried.

Obviously also, once you translate a statement into a proposition, it is apt to look funny.

But since the translation's function is to make it easy to do logic, then a complex statement like Lincoln's, with all of its qualifying phrases and clauses, should be reduced to the simplest form possible consistent with not losing anything that is logically relevant. Hence, "Every brought forth thing is a new nation" would probably serve as a reminder of what the statement is; and once the logical manipulation is over, the reverse substitution could be made to make a statement out of the conclusion, referring back to Lincoln's statement rather than the original proposition.

To take one more example, the statement of Jesus is traditionally translated into English as "Blessed are the poor in spirit," where the Greek word-order is used, and the subject of the statement comes last. As a proposition it would read "Every poor in spirit thing is something blessed."

But how do you know what reference to give the predicate term? It doesn't really seem to have one in the propositions I have stated so far.

Ah, here is one of the secrets of logic. Textbooks will give you rules indicating the "extension" of the predicate. I will also give them, and reveal the mystery of why the rules are what they are.

Rule: If the copula is affirmative, the predicate is indefinite; if the copula is negative, the predicate is definite.

So you don't need a word to indicate the reference of the predicate, because it is determined by the "quality" (affirmativeness or negativeness) of the copula.

Now why is this? Because, as I said earlier, the predicate doesn't actually refer to a class of objects, but to the relationship among the members of the class; and so it expresses the meaning of the proposition, not its referent. But since we want to be able to use the predicate as the subject of a different proposition, then we have to know what it would refer to if in fact it were pointing out a bunch of objects: is its pseudo-reference definite or indefinite?

In the proposition, "Every horse is a four-legged thing" what you are saying is that if you take any horse out of the class of horses, you will find that it is similar to anything that is four-legged. Then what does this say about horses and the class of four-legged objects? It should be obvious that it doesn't say that horses are the only four-legged objects there are; and so horses form an indefinite subset of the class of four-legged things.

Note, by the way, that an indefinite subset could be the whole set; it's just that you don't know this by what is actually said in the proposition. For instance, "Every proposition is a statement in logical form" has an indefinite predicate, even though there are no statements in logical form that aren't propositions (since the definition of "proposition" is "a statement in logical form"). But you don't know from the proposition that it is a definition.

Negative propositions, even indefinite ones, as I said, have definite predicates, because if the subject does not have the relation in question, then it belongs outside the whole class of objects the predicate would be referring to (it wouldn't be like any member of the class that has the relationship). Thus, "Every horse is not a dog" indicates that every single member of the class of horses is outside the class of dogs (since no horses have "dogginess" and every dog does). But even "At least one horse is not a palomino" means that at least one horse is totally outside the class of palominos (though there may be some other horses that are not).

The easiest way to remember this rule is that it is the opposite of what you would superficially expect. Since we tend to think of definite references as in some sense the "good" ones and affirmative copulas as the "good" ones, we would tend to infer that affirmative copulas "ought to have" definite predicates. But, as I said, the exact opposite is the case.
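In the Python illustration, the rule is a single line; a sketch, continuing the Proposition example above:

    def predicate_is_definite(affirmative: bool) -> bool:
        # The opposite of what you would superficially expect:
        # affirmative copula -> indefinite predicate;
        # negative copula -> definite predicate.
        return not affirmative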



Chapter 4

Operations using a single proposition

That, then, is what the proposition looks like. Now is there anything we can infer from a proposition as it stands? It turns out that there are a couple of things.

First, there is the operation called conversion, in which you interchange the subject and the predicate, drawing, in other words, an inference about the class of objects implicitly referred to by the predicate, based on the meaning implicit in the subject.

Let me define some terms, and then give the rules for this operation and say a little about them.

Conversion is the logical inference involved in interchanging the subject and predicate of a proposition.

The converse of a proposition is the conclusion that results from conversion.

Rules for conversion:

1. Leave the copula alone.

2. Interchange the subject and the predicate.

3. Check to see that the new subject has the same reference (definite or indefinite) as was implicit in the old predicate.

4. Check the implicit reference of the predicate against the reference it had as subject. If it is not the same:

a. If the term became indefinite from being definite, this is permitted.

b. If the term became definite from being indefinite the inference is not valid.

Since you can control the reference of the new subject, then you just explicitly give it the reference it implicitly had as the old predicate, without changing it. This is the point of Rule 3.

Rule 4 is based on the fact that you can't control the reference of the new predicate, because it doesn't depend on what it was before, but on whether the copula is affirmative or negative. Hence, this might necessitate a change in its implicit reference. And you can't conclude to a definite reference (knowing what the objects referred to actually are--being able to point to them individually) from an indefinite one (which supposes that you don't know which ones they are).

That's why you can't "conclude to the universal from the particular"; it's not that you're concluding to a larger class from a smaller one, it's that the "particular" is known not as an object, but only indefinitely, in its relation to the class (i.e. as belonging to it, not as "this thing"); and you can't point to something if you couldn't point to it before.

To take a couple of examples, "Every horse is an animal" converts into "At least one animal is a horse." "Every man is not an island" converts into "Every island is not a man." "At least one human being is a typist" converts into "At least one typist is a human being." But note that "At least one human being is not a typist" can't be converted, because "human being" would become the predicate of a negative proposition, and so definite; but it was indefinite before.

So essentially, what Rule 4 says is that indefinite negative propositions can't be converted.
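To continue the running Python illustration (a sketch only, assuming the Proposition class and the predicate_is_definite helper given earlier; the sketch ignores English articles), the four rules come out like this:

    def convert(p):
        """Interchange subject and predicate; return None if invalid."""
        # Rule 3: the new subject explicitly takes the reference the old
        # predicate implicitly had (fixed by the copula's quality).
        new_subject_definite = predicate_is_definite(p.affirmative)
        # Rule 4: definite -> indefinite is permitted (4a), but the old
        # subject may not go from indefinite to definite (4b).
        if predicate_is_definite(p.affirmative) and not p.definite:
            return None   # indefinite negative propositions can't be converted
        return Proposition(subject=p.predicate, affirmative=p.affirmative,
                           predicate=p.subject, definite=new_subject_definite)

    print(convert(Proposition("horse", True, "animal", True)))
    # At least one animal is horse
    print(convert(Proposition("man", False, "island", True)))
    # Every island is not man
    print(convert(Proposition("human being", False, "typist", False)))
    # None (the indefinite negative has no converse)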

There is one other thing to beware of in converting propositions: you must not base your conclusion on what you happen to know is true of the proposition as a statement, but only on the proposition as it stands. That is, you might know that only human beings can laugh (because spirits have no bodies, and non-human animals can't understand, and so their laughing sounds aren't real laughter). Hence, you might be tempted to convert the proposition, "Every human being is a laughing thing" into "Every laughing thing is a human being." But this doesn't follow, even though it happens to be true, because "human being" is now definite, whereas before it was indefinite. You can see that it doesn't follow from substituting "mortal thing" for "laughing thing."

The other operation with a single proposition changes its "quality," or the affirmativeness or negativeness of the copula.

Obversion is the logical inference involved in changing the copula from affirmative to negative or vice versa.

The obverse is the conclusion of an obversion.

Rules for obversion:

1. Leave the subject alone.

2. Change the copula from affirmative to negative or vice versa.

3. Add a negative to the predicate term.

4. Cancel pairs of negatives.

In Rule 3, the negative added to the predicate (a "non-" if it is a single word, or a "not" in some clause within it) is to make it "refer" to the contradictory class of objects from the preceding predicate (i.e. to the class of "everything else but" that one). Here, there is no need to worry about the predicate's changing from indefinite to definite (as it will if the original proposition was affirmative), because it is a different predicate, and the new predicate doesn't "refer" to the original class at all, but to an entirely different set of objects.

For example, the obverse of "Every human being is a mortal thing" is "Every human being is not a non-mortal thing." Here, the class of non-mortal things is definite, while that of mortal things is indefinite; but as you can see intuitively, if every human is within the class of mortal things, this will put every human outside the class of "everything but mortal things." Similarly, the obverse of "Every human being is not an island" is "Every human being is not not a non-island," which, by Rule 4 becomes "Every human being is a non-island." In this case, every human's being outside the class of islands automatically puts every one within the class of non-islands.

The indefinite propositions are the same: The obverse of "At least one human being is a typist" is "At least one human being is not a non-typist," and similarly, "At least one human being is not a typist" becomes "At least one human being is not not a non-typist," and, canceling the double negative, "At least one human being is a non-typist."

The only fallacy to watch out for in obversion is using the contrary class instead of the contradictory. The contrary is the opposite class on the scale, as white is the contrary of black; and the assumption in contraries is that there are things in between. Contradictories exhaust the whole universe, as non-black is the contradictory of black, and involve all the other objects there are, whether they are in the category in question or not. For instance, gray things, red things, things weighing two pounds, dogs, and even nothingness are all included in the class of "non-black."
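In the running sketch, the contradictory (with its cancellation of pairs of negatives) and the obversion itself each take only a couple of lines; again this is illustration, not the logic:

    def contradictory(term: str) -> str:
        """The contradictory (not the contrary) class of a term; stripping
        an existing "non-" is what cancels pairs of negatives (Rule 4)."""
        return term[4:] if term.startswith("non-") else "non-" + term

    def obvert(p):
        # Rules 1-3: keep the subject, flip the copula, negate the predicate.
        # The intermediate "not not" stage never appears, because
        # contradictory() cancels it on the spot.
        return Proposition(subject=p.subject, affirmative=not p.affirmative,
                           predicate=contradictory(p.predicate),
                           definite=p.definite)

    print(obvert(Proposition("human being", True, "mortal thing", True)))
    # Every human being is not non-mortal thing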

Note that two obversions in a row get you back where you started, while this is not the case with conversions, since the references change because of the shift from subject to predicate.

Logicians talk about other operations such as "contraposition," but these are just alternate conversions and obversions, and have nothing special about them. They do, however, show how many different propositions you can generate just from one original. For instance:

"Every human thing is a mortal thing" obverts to

"Every human thing is not a non-mortal thing," converts to

"Every non-mortal thing is not a human thing," obverts to

"Every non-mortal thing is a non-human thing," converts to

"At least one non-human thing is a non-mortal thing," obverts to

"At least one non-human thing is not a (non-non) mortal thing," which cannot be converted, because "non-human thing" would become definite from being indefinite.

This is perfectly straightforward, following the rules. Note that we said "non-mortal" and not "immortal" (which is the contrary of "mortal"), because stones are not mortal, since, not being alive, they can't die; but by the same token, they're not immortal either. Note also that it takes a good deal of puzzling to think out whether "At least one non-human thing is not a mortal thing" actually follows from "Every human thing is a mortal thing" or not. That is, is it actually the case that if in fact every human being is mortal, it can't be false that there's a non-human that isn't mortal? (Always supposing, as we said earlier, that there are humans, non-humans, mortals, and non-mortals. The rule is that all classes have at least one member. It does not follow, of course, from the mere fact that every human being is a mortal thing that there are any non-mortal things.)
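For what it is worth, the two little functions sketched earlier reproduce this whole chain mechanically, ending with the invalid conversion:

    p1 = Proposition("human thing", True, "mortal thing", True)
    p2 = obvert(p1)    # Every human thing is not non-mortal thing
    p3 = convert(p2)   # Every non-mortal thing is not human thing
    p4 = obvert(p3)    # Every non-mortal thing is non-human thing
    p5 = convert(p4)   # At least one non-human thing is non-mortal thing
    p6 = obvert(p5)    # At least one non-human thing is not mortal thing
    print(convert(p6)) # None: this last conversion is the invalid one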

Logicians talk about the "Square of Opposition," which consists of the four possible propositions with the same subject and predicate: that is, the definite affirmative, definite negative, indefinite affirmative and indefinite negative. These propositions are related among each other in interesting ways; but I will discuss them after we discuss the various ways of joining propositions into a compound proposition (because the "square" happens to embody all the ways you can join two propositions into a compound).



Chapter 5

Compounding propositions

The preceding inferences were not called "syllogisms" because they involve only one premise, and "syllogism" is the Greek word for "a combined statement."

A syllogism is an inference with two premises.

As long as I have defined this, here are a couple more terms:

An enthymeme is a syllogism with one proposition not explicitly stated.

Enthymemes are often the way we reason in ordinary language, because the statements that are left out are so obvious that it insults the intelligence of the hearer to state them. In the informal use of logic, we also tend to put the conclusion first (as I am doing in this sentence), because we want to let the hearer know right away what we are driving at, and then give him the evidence for it. So you would say, "John is going to die, because every human being dies," rather than, "John is a human being, and every human being is something that will die, and so John is something that will die." You don't need to say, "John is a human being," because your hearer knows what you are referring to (not to some dog named "John"). Enthymemes can also leave out the conclusion, as obvious. You might say, referring to John's propensity for living dangerously, "Well, he's human, after all, and all of us have to die sometime." It would be insulting to your listener if you then said, "and so he has to die sometime too."

A sorites is a chaining of several syllogisms or enthymemes.

You might give this hypothetical sorites to someone, for instance: "If you try drugs for fun, then you might start doing drugs, and if you do drugs, then you're going to become an addict, and if you're an addict, you've got nothing to live for but drugs." In the informal use of logic, this would usually be followed by "Then why try drugs for fun?" which points first of all to the omitted conclusion, "If you try drugs for fun, you're going to have nothing to live for but drugs," and the following evaluative inference, "If you don't want to have nothing to live for but drugs, then don't try drugs for fun."

Now then, what are the ways we can combine two propositions so that we can generate a conclusion from their parts?

Let me first state a general rule that can be helpful, since for the moment we are not getting inside a proposition and looking at its parts:

Rule: For purposes of combining whole propositions, a statement in any form is taken as a proposition.

That is, there isn't any special logical form for statements as components of compound propositions. This will not be true for the categorical syllogism, because it is precisely the way of compounding propositions that depends on the characteristics of the subjects and predicates of the combined propositions. But other types of syllogisms don't have to worry about how the components look.

Let me also make a couple of definitions to make what is going on in logic a little clearer.

The inferential mode of reasoning affirms the compound and affirms or denies one of its components, and concludes to the affirmation or denial of the other.

The refutational mode of reasoning affirms or denies each of the components and concludes to the affirmation or denial of the compound.

There may, of course, be more than two components in the compound proposition. "Either you're asleep or you're thinking of something else or you're stupid" is a perfectly legitimate compound proposition, for instance. In these cases, the rules for the compound with two components apply mutatis mutandis, and so I'm not going to discuss them further.

The reason why I called the second mode of reasoning "refutational" is that, as we will see, the inference from the components to the compound is only valid in proving that the compound is false, because the alleged connection between the components is not what the compound says it is. To understand this, we have to be clear about what the criteria are for a valid inference.

We need a couple of other terms:

An inference is sound when the premises are factually true statements, and they generate a conclusion which cannot be factually false. Otherwise, the inference is unsound, even if the conclusion happens to be true.

This is what we ordinarily mean by the "validity" of an inference, because we see no reason for a person's giving premises which he doesn't think are true (i.e. they may be negative statements, but he considers them true, or why say them?). But validity, strictly speaking, is something more hypothetical.

An inference is valid when, if the premises are true, the conclusion cannot be false.

So for the logic to be valid, the premises don't have to be true; but even when they're not, it must be the case that if they were, the conclusion would be true.

Criterion for a valid inference (contemporary): An inference is valid if, when stated as a conditional proposition, it is true for all truth-values of the components.

This is another way of saying that in contemporary logic, the inference is valid if its expression is a tautology. By "tautology" here is not meant simply "the same term is repeated," which is what we ordinarily mean by "tautology," such as "a blue bird is blue," or by its definition, such as "a valid inference is a potentially sound inference," but also such statements as "George Blair is not anything but George Blair." That is, any statement that fits the Principle of Identity we discussed in Chapter 8 of Section 1 of the First Part 1.1.8 is what we ordinarily mean by a tautology (it says the same thing); but tautologies also apply to statements which fit the Principle of Contradiction (it amounts to the same thing).
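The criterion itself is mechanical enough to program; here is a minimal sketch (the encoding of the conditional as "not p or q" anticipates the truth table given later in this chapter), with modus ponens and a well-known fallacy as the worked examples:

    from itertools import product

    def implies(p: bool, q: bool) -> bool:
        # The conditional's logical function: false only when p is true and q false.
        return (not p) or q

    def is_tautology(compound, n: int) -> bool:
        """True if the compound is true for every assignment of truth-values."""
        return all(compound(*values) for values in product([True, False], repeat=n))

    # Modus ponens, "((p implies q) and p) implies q," is a tautology:
    print(is_tautology(lambda p, q: implies(implies(p, q) and p, q), 2))  # True
    # Affirming the consequent, "((p implies q) and q) implies p," is not:
    print(is_tautology(lambda p, q: implies(implies(p, q) and q, p), 2))  # False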

Now contemporary logic talks about two kinds of fallacies, which mean that, in their system, the definition of soundness I have given above is not accurate. Contemporary logic's definition of soundness is "If the premises are true, the conclusion cannot be false." But "true" in contemporary logic does not mean exactly the same thing as what I mean by a "factually true statement."

The two kinds of fallacies can clear up what I am talking about. A formal fallacy in contemporary logic occurs either with a false premise or a violation of a logical rule. An informal fallacy would be using a word in "two different senses" (taking the same word as the same term, in my terminology, when in fact it is two different terms); or by concluding to something that was irrelevant to the premises--something that you could only discover by looking at the sense of what you were saying rather than the form as defined in contemporary logic.

Thus, for instance, to argue from the fact that George Bush is in the White House and the other fact that my feet hurt to "George Bush is in the White House and my feet hurt" is a sound argument in contemporary logic, because, given the truth of the premises, the conclusion can't be false.

But a person could say, "But Bush's presidency has nothing whatever to do with the state of your feet," meaning not that either of the two statements was false, but that the fact that each is true does not mean that you can conjoin them. Hence, the person would contend, it is false to join them into a single statement as if together they expressed a fact, when they in fact express two distinct facts.

In that sense, the premises can be true and the conclusion false. "But that isn't what we mean by 'false,'" the logician would say, "because 'and' as we use it does not say that both statements together form a statement of a fact, but merely that each expresses a fact. And, of course, if each expresses a fact, it is sound to say that each expresses a fact (that is, it is a fact that each expresses a fact)." So the inference is sound.

Now what I am going to try to show in what follows is that in each case, if you take the meaning of the connective to be solely its logical function, then there is no occasion for anyone ever to utter as a statement of fact the proposition using the connective in this way. So factually, the inference is not sound.

And what I will conclude from this is that, even if logic as defined in contemporary terms is internally consistent, it has no application to statements of fact, because as statements of fact, its compound propositions (including the statements of its inferences) are "statements" that no one could have any reason for uttering in the sense contemporary logic intends them.

Let me here define what I mean by the logical function and the meaning of a connective:

The logical function of a connective combining statements (or propositions) is the indication of what is to be done with the statements connected.

The meaning of a connective is how the facts stated by the statements are interrelated.

For various reasons, some logicians who still hold that logic deals with the world "out there," like Bertrand Russell, for instance, have problems with "connected facts." But since, if you refer back to Chapter 6 of Section 5 of the first part 1.5.6 (not to mention what leads up to it), for me a fact is a connection among objects (spelled out in terms of knowledge a little more in Section 3 of the third part), then I am not going to bother with trying to establish that there can be "factual interrelations."

I gave one example of the difference between logical function and meaning with "and," which I will discuss more at length below; but just to be clear about it, let me say that the logical function says that each component must be affirmed (i.e. accepted as stating a fact), and the meaning adds to this that the two are somehow connected. To take another example, the statement, "If Chicago is in Illinois, then I am getting gray" illustrates the connective called the "implication." You are obviously bright enough to see pretty clearly how "if...then..." functions logically as a connective; but the reason why the statement sounds strange is of course that beyond this logical function, the connective also means "the second statement's being a fact depends in some way on the fact expressed first." Clearly, there is no dependence in the example.

My position is that the logical function of a connective is not divorced from its meaning, but included within it, so that if the logical function is violated, the connective is wrongly used (is false). But since the meaning goes beyond mere logical function, then the connective can be false and still used properly in its logical function.

Further, I contend that the logical function is derived from the meaning (i.e. depends on it) and is not just an adjunct to it;(1) that is, it is because facts have certain interrelations that statements have certain connections and not others, and you can't just stick in any connective you want at any given time and still hope to be describing the real world. For instance, the implication in statements occurs because effects really depend on causes for their existence, and we know this. That effects really depend on causes is the whole point of the first part of this book from Section 2 on; that we know this is the burden of Section 5 of that part and Section 3 of the third part. If there were not a connective such as "if...then...," we would have to invent one.

So what is at the base of my problem with contemporary logic is its epistemological stance that says that you can refer to the real world without taking into account the meaning of the connectives--or even worse, that the language is simply self-contained, referring to nothing outside itself, in which case to use it to critique the logic of what anyone else says is like criticizing a statement in French which happens to use words that look like English on the grounds that it doesn't make sense in English.

Of course, by that token, I would not be "allowed" to criticize what is said in contemporary logic because it doesn't make sense in my logical system. But that's only forbidden for a person who buys into the idea that a system can't apply outside itself, and I simply deny this for the same reason that I deny relativism, as I said in Section 1 of the first part. For a person within a self-contained system to issue a "rule" that criticism of his system from outside is invalid or illegitimate obviously contradicts the self-containedness he demands for his system (because he's criticizing some system outside his).

My contention is that there is a logic of statements, which may or may not be very complex and only approached by any known system of formal logic; but formal logic is an attempt to discover and formulate this logic. Hence some logics are better than others because they more accurately express more of how we in fact reason when we connect the expressions of our acts of understanding to generate what we realize are new relationships between objects from old ones.

If someone disagrees with this and

Should say, "That is not what I meant at all.

That is not it, at all."

my answer will be, I dare to eat a peach. Let her go her way, like the skeptics and the relativists of Section 1 of the first part. "And turning to [the reader] he said, 'Do you want to go away too?'"

This is not to say that I find contemporary logic inapplicable. As I said, the logical function of connectives is contained within their meaning; and so insofar as the connections between what is said depend on the logical function of the connectives, that version of logic will apply to it, and since anything connected depends at least on the logical function of the connective, then what violates contemporary logic (what is invalid in it) will be invalid for statements also; but there will be things that are allowed in contemporary logic that are fallacies in statement logic. Hence, contemporary logic can be safely used for refutational purposes only.

Because contemporary logic doesn't really tell you what to do with statements, I will give rules on the permitted and forbidden logical operations based on the meaning and function of the connectives in question. This is very close to Aristotelian logic.

Now then, contemporary logic's criterion for validity above needs some explaining, and in order to explain it, I have to give you the truth table of the conditional proposition with its components. I will discuss the conditional proposition later; but for now, its truth table will allow me both to illustrate what a truth table is and show how it works.

The first thing to note is that contemporary logic uses the letters "p" and following to indicate whole propositions in any form; and since we're not now interested in subjects and predicates, as I said, we can do this also. One convention here is that if the same letter appears twice, it stands for the same proposition both times. So "p" means "any old proposition," and "q" means "any other proposition you please."

I am not, however, going to use contemporary logic's dots, V's, slashes, horseshoes, and so on that symbolize the connectives, because they make the whole thing terribly confusing to look at; and in a matter like this, unnecessary confusion is something you want to avoid if at all possible. So I will, as above, use the names of the connectives. Here, then, is the truth table:

p      q      p implies q
T      T           T
T      F           F
F      T           T
F      F           T

Now if you look at the T's and the F's in the first two columns, you will see that they exhaust all the possible combinations of affirmation and denial there are with two propositions. If there were three, there would be eight lines in the truth table; but as I said, we are only interested in the basic ideas, so we will stick with two propositions. The T's and F's in the third column are what the compound proposition turns out to be based on the logical function of the connective and the T's and F's on the corresponding line of the components' columns.(2) Thus, the first line says that when "p" and "q" are both true, the compound proposition "p implies q" is also true. For instance, the compound "If dogs are mammals, then dogs are animals" is true, given that both "dogs are mammals" is true and "dogs are animals" is true.

Don't confuse reading a line of the truth table with an inference, however; the truth table is just what you might call the "logical sense" of the compound proposition: a kind of "truth-definition" of it; it defines its truth-value based on the logical function of the connective, though not its meaning. This obviously has to be the case, since logic defines the meaning of the connective to be nothing but its logical function. We will see more of this distinction as time goes on.
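To see the mechanics of this "truth-definition," here is a minimal sketch of it in Python (the code and the name "implies" are mine, purely for illustration; the point is only that the connective is defined by nothing but the pattern of Ts and Fs):

    def implies(p, q):
        # Material implication: false only on the line where "p" is true
        # and "q" is false; true on every other line.
        return (not p) or q

    for p in (True, False):
        for q in (True, False):
            print(p, q, implies(p, q))

Running it prints the four lines of the table above, in the same order.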

But to return to the truth table of the conditional proposition, the inference above about Chicago would be expressed like this, with "Chicago is in Illinois" being "p" and "I am getting gray" "q."

[(p implies q) and p] implies q

The difference between the inference and this proposition is that "p" as an affirmative proposition is not the affirmation of "p," but simply a "proposal of 'p,'" one that is "proposed for the sake of argument." But it can be in reality false (and can be known to be false. That's what the truth tables are for). Similarly, a negative proposition is not a denial, because it is "proposed as" true and can be denied. Since an inference proceeds by way of affirmations and denials, this is simply the expression of an inference, which can be false in the various ways in which statements can be false, as we saw in Chapter 5 of Section 3 of the third part 3.3.5.

But a statement such as this expresses a valid inference (in contemporary logic) when as a complete statement, it is true all the time, no matter whether the component statements are true or false in themselves. That is, when the connective expressing the main verb (in this case, the "implies" on the right-hand side) is true all the time, no matter what p and q are themselves, then the inference is valid. This is what contemporary logic means by "a tautology."

The way you establish the validity of the inference is this:

First, knowing the truth table for "p implies q," you substitute the last column of that truth table for the column that represents the parentheses, and at this stage we have

p      q      (p implies q)
T      T          (T)
T      F          (F)
F      T          (T)
F      F          (T)

Now we have to look at the truth table for "p and q" (given below under the discussion of "and") to get the next stage, since "and" is only true when both components are true; and that gives us (ignore the Ts and Fs in the square brackets for the moment):

p      q      (p implies q)      and p      implies q
T      T          (T)              T           [T]
T      F          (F)              F           [T]
F      T          (T)              F           [T]
F      F          (T)              F           [T]

Note that you can't read this table from left to right. You have to read first what has no parentheses or brackets around it, then what has parentheses, then what has square brackets, and lastly what is in braces.

since "and" is only true when both components are true. Now we're ready for the last stage, expressed by the Ts and Fs in the square brackets. The column under "and" now is our new "p" and by the truth table for "p implies q" we see that the column for the last "implies" (the letters in square brackets) is all Ts, since it is T when the "p" is true and the "q" is true, and T when "p" is false no matter what "q" is.

Therefore, that inference is, as I said above, a tautology, or is valid, according to the contemporary criterion of validity.
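In fact, the whole check can be done mechanically; here is a minimal sketch in Python (the helper names are mine), which simply grinds out every line of the truth table:

    from itertools import product

    def implies(p, q):
        return (not p) or q

    def is_tautology(compound):
        # Valid by the contemporary criterion: true on every line of the table.
        return all(compound(p, q) for p, q in product((True, False), repeat=2))

    # [(p implies q) and p] implies q
    print(is_tautology(lambda p, q: implies(implies(p, q) and p, q)))  # True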

With that out of the way, then, let us go to the first of our connectives, which is called a conjunction of propositions, and simply asserts the fact that the propositions are connected:(3)

The logical function of "and" is that each of the component propositions is to be affirmed.

The meaning of "and" is that the two facts affirmed are connected somehow; but it does not specify what the connection is.

In the logic of statements, this is trivial. It is obvious from the logical function that if the compound proposition with "and" is to be affirmed (and why would you state it as a fact if you weren't affirming it?), then the only thing you can do is affirm each part. You can't deny either one, and the affirmation of one doesn't imply the affirmation of the other (the affirmation of the compound simply affirms both already).

The reason is that you can't affirm it unless you already know the truth of both parts, and so you would already explicitly know the "conclusion" before you drew it. So it isn't reasoning to say, "John is tall and John is strong; and John is tall; therefore, John is strong." The second "premise" is a waste of time, and the conclusion doesn't follow, because it was already stated in the compound proposition itself. And if you say, "John is tall and John is strong and John is not tall," you've already contradicted yourself explicitly.

In contemporary logic, this is how the truth table for "and" looks:

p      q      p and q
T      T         T
T      F         F
F      T         F
F      F         F

Now as I said with the conditional proposition above, this is just an assigning of truth values to "p and q" based on the logical function of the connective, and it is not an inference.

You can, however, make inferences based on it, even though when you see them translated back from symbols into sentences, they look silly. We saw one of them above ("John is tall and John is strong, and John is tall, implies John is strong"), which had the form [(p and q) and p] implies q. When all you see is the letters, this looks like an inference.

The truth table check on the proposition looks like what is below. Here, to save space, I have introduced a convention. Again, what is not enclosed in any kind of bracket is the initial stage of "p" and "q"; the result of the second stage is in parentheses; the result of the third in square brackets, and that of the fourth in braces (if there were more steps, they would be in double parentheses, double brackets, and so on).

p      q      (p and q)      [and p]      {implies q}
T      T         (T)           [T]           {T}
T      F         (F)           [F]           {T}
F      T         (F)           [F]           {T}
F      F         (F)           [F]           {T}

Since again the last step is all T's, the inference is once again valid. But as I said, no one would ever have any occasion to perform such an "inference."

If, however, we try to reason the other way, from the components to the compound, this is what we get, indicating the first compound by what is in the parentheses, and the final step by what is in the brackets:

p      q      (p and q)      implies      (p and q)
T      T         (T)           [T]           (T)
T      F         (F)           [T]           (F)
F      T         (F)           [T]           (F)
F      F         (F)           [T]           (F)

And here we run into the heart of my difficulty. For instance, why is the statement, "George Bush is in the White House and my feet hurt" funny? Because it connects two statements as if the facts were connected; and the humorist expects people to recognize that the facts are not in fact connected.

First note, however, that that's not quite what is being said in the conditional proposition above. That proposition actually says "If (George Bush is in the White House) and (my feet hurt), then George Bush is in the White House and my feet hurt." Well of course. But in making this into one sentence, you have to make the conjunction in the antecedent (the "if" clause), from which the consequent (the "then" clause) trivially follows. In other words, you have turned the inference into the form "p implies p."

But that isn't what the statements say. It is invalid to argue from a true statement and any other true statement to a conjunction of the two statements, because they might be conjoined or they might be totally unconnected. Putting the inference into a conditional proposition in contemporary logic can't spot that fallacy--and, indeed, in contemporary formal logic, it isn't a fallacy, and the argument is sound, which means that the conclusion is true. Of course, it's true that any two true statements always can be conjoined (if under no other guise than that they're both examples of true statements); but the conjunction may or may not express an actual connection of some sort among the facts, or the statement about the President wouldn't be funny. Consequently, it does not follow in the logic of statements that the conjunction must be true when both of the components are true.

Contemporary logic's "and" is a weak "and," which does not say that both components (together) express something that is true, but merely that each is true. That is, it does not say that the two propositions are connected.

The question I raised above now arises of whether there could ever be a reason for using the connective in the sense defined by contemporary logic.

I don't see how there could be, because of the fact that the two propositions are connected into one sentence (one compound proposition); and it is bound to be misleading to connect two things which explicitly are not to be taken as connected.

Now you can say if you want, "When I connect things, they might be connected and they might not, and so you aren't to understand them as connected." My answer would be, "If you don't want me to understand them as connected, don't connect them." Instead of saying "p and q" as one proposition, state two distinct propositions: "p" period. "q" period. That leaves it open as to whether the two are connected or not, since it precisely says nothing about it. It's certainly possible to do this rather than redefine "and" to be something that no one else would ever use.

In other words, the very act of connecting the two propositions into one contradicts the "definition" in contemporary logic of the connective as something-that-does-not-express-a-connection. You can say, of course, that no facts are connected, and so "and" can't mean anything but the convenience of getting the propositions in convenient shape to be worked on; but that's the epistemological stance I think is simply silly, or statements like the one about the President wouldn't be funny. Such statements recognize that some statements express connections between the facts represented and some statements don't. Hence, "and" means something.

Logicians preen themselves on being unambiguous and on saying no more than, and just precisely, what they mean. But I don't personally see how you can avoid ambiguity when you connect things that may not be connected. Better to reserve "and" for propositions that express what is somehow connected; then you leave no ambiguity in what you are doing. The logicians would object that they want all the propositions in an inference to be expressed as one single proposition. Very well, then the ambiguity can't be escaped; but don't claim that you're being unambiguous.

So much for my first argument against the validity of contemporary logic as a system. If we now look at the next connection, said to be a form of "or," but which I call "is incompatible with" in its clearest formulation, my problem with contemporary logic will be a little clearer.

The logical function of "is incompatible with" is that at least one of the components must be denied.

The meaning of "is incompatible with" is that the facts stated in the components are incompatible with one another.

This connective is not logically trivial, because all you know by affirming the compound is that one or the other, and possibly both, of the components must be denied (is false), but you don't know which one. And as a statement it is not trivial either, because what you are asserting by the compound is the fact of incompatibility between the components, not necessarily any knowledge of the factuality of either of them.

Generally in common speech, this connection is stated negatively: either as an impossibility, as in "you can't have your cake and eat it," or more often in the form "not p when q," as in "The cat is not outside when it is raining." Or possibly the statements are given as gerundives connected with "is incompatible with," as in "The cat's being outside is incompatible with its being rainy." Note that the second proposition looks as if its first part is a denial; but the "when" shows that the denial belongs to the whole statement. It means "It is not the case that the cat is outside when it is raining," or "It is not, as a general rule (the force of the "when," as we will see below), simultaneously true that the cat is outside and it is raining."

Here what I am going to do is say what I think is wrong with contemporary logic's approach to the proposition, and then afterwards list the valid inferences that can be made from it. Once again, I think that contemporary logic's ignoring of the meaning of the connective allows it to make "valid" inferences that are fallacies when taken as statements. Let us look at the truth table:

p      q      p is incompatible with q
T      T                F
T      F                T
F      T                T
F      F                T

which is just the opposite of "and," you'll notice; and in fact, it is the logical equivalent of "not (p and q)." And here is the problem. The statement, as I will try to show, is not merely the denial of a conjunction.

Observe that, if you affirm both components, you necessarily have to deny the connection and so there is a legitimate inference this way. For example, if you say, "The cat is not outside when it's raining," you can prove the connection inappropriate by showing an instance when the cat is outside and it is raining (i.e. by affirming both). So this inference works in both contemporary and statement logic.

But the difficulty with contemporary logic, as I said, is not in its refutational use, but in its use in an affirmative sense; and the reason is below:

The meaning of "is incompatible with" in contemporary logic is a weak "is incompatible with," which simply denies that both components are true, but says nothing about whether they are incompatible with each other, but simply that one or the other or each is false.

Now it might seem that I've loaded the dice here, because what I call "is incompatible with," contemporary logic (when it uses this connective at all, which is very seldom) simply calls "not both." But what I am going to try to show is that to take the compound in the sense of "The two don't happen simultaneously to be true" produces a statement that there would be no reason for making.(4)

To begin, then, the meaning of the proposition as contemporary logic would have it could not be expressed as "The cat is not outside when it's raining," because the "when" makes it a general statement (i.e. of what is always the case), and so rules out the statement as merely a simple statement of what is going on now. As a simple statement of what's going on at present (a negative conjunction), it would be stated, "It isn't simultaneously true at the moment that the cat is outside and it's raining." Here, all you would be intending to state is just that the two happen not both to be the case.

But could you make a statement "It is not simultaneously true that p and q" as a simple statement of fact, totally unconnected with any general rule? It would be difficult to imagine an occasion for it. First of all, in this case, how could you know whether it was true as a whole or not without knowing anything of the truth or falseness of at least one of the components? That is, how could you possibly assert as now the case that it's not simultaneously true that the cat is outside and it's raining without knowing whether either of these were true or not? So you can't make it without knowing something about the components.

Secondly, if you know one of them is true, you still can't assert that the conjunction is false (even contemporary logic says this), because, for all you know, the other one might be true, making your proposition false. That is, if you know that the cat is outside and that's all, you don't know that it's false that the cat is outside and it's raining--unless, of course, you knew the general rule that the two are incompatible. But we're not talking now about incompatibility, but simple statements of fact.

Thirdly, if you already knew that one component was false, why would you conjoin the opposite of this false statement with any other proposition? That is, if all you know is that it's not raining, why would you then say, "It isn't simultaneously true that it's raining and Lincoln is in the White House"? Here again we have the problem of connecting two propositions into a single compound proposition with absolutely no reason to connect them. The only "grounds" you could give is that you happen to know that the opposite of one of the components is true; but those "grounds" are exactly as good for connecting, "It's not simultaneously true at the moment that there's life on Mars and the temperature in Miami is ninety degrees."

Fourthly, if you know that one is true and the other false, you would be giving misleading information. The reason is that if you're trying to convey to someone what is the case, and you know that the cat is outside and it's not raining, then to say "It's not simultaneously true that the cat's outside and it's raining," conveys the information that both might be false, which is impossible as a statement of the present situation, because one is true.

Finally, if you know that both are false, and you want to tell someone what the present state of affairs is, you would say, "The cat is not outside and it's not raining," denying each of them, not denying the conjunction, because that also conveys that one of them might be true, when in fact, neither of them can be true because they're both false.

So I submit, therefore, that it is unreasonable to make an "is incompatible with" statement of fact as a mere statement of what is at present the case. If you have no information at all, then you can't make the statement; if you know the truth of one, you can't do it either, because it doesn't follow; if you know the falseness of one, then the statement you make has no connection with the information you have; if you know the falseness of one and the truth of the other, you're conveying the false information that both might be false; and if you know that both are false, you're conveying the false information that one might be true.

This is not to say that the statement might not sometimes be true, as in the case of knowing that one is false and the other true; but in that case it is unjustified, which means it is made capriciously. It is also possible that the context could be peculiar enough so that the misleading information in the last two instances would be removed (as, for instance if you actually gave the information you knew first); but in that case, it would be superfluous, because you would previously have given more information than you give by the statement.

Let me just illustrate this last case. You could say, "The cat is not outside and it's raining out, and it's not simultaneously true at the moment that the cat is outside and it's raining out." But why would you ever say a thing like that? Even if you said, "The cat is not outside and it's raining out, and so (i.e. implies) it's false at the moment that the cat is outside and it's raining out," that's just as bad. In neither of these two cases have you conveyed any more information by the second part of the statement. Everyone would recognize that what you said was true, but making the statement would be completely redundant. So either such a statement is redundantly repetitious pleonastic superfluity, or it is misleading.

A word on ambiguity. Logicians, as I mentioned, like to think that their meaning of the connectives avoids ambiguity. But the "is incompatible with" proposition is precisely ambiguous. That is, it leaves open three possibilities: "p" is false, "q" is false, and both are false, but does not distinguish among them. Now to leave open three interpretations of the proposition without picking out one is to leave the proposition ambiguous in its truth value. Granted, the connective is precise in its logical function, because these are the possibilities and there are no more and no fewer; but to confuse precision with unambiguity is not to speak precisely (or unambiguously, for that matter).

So much for that. Now could the "is incompatible with" (in the weak sense of "not in fact both") statement be made as a statement of what has frequently been the case, without implying any incompatibility between the two statements and without grossly misleading your hearer? (Contemporary logicians tend to say, remember, that their way of speaking is the way we "ought" to speak to make ourselves clear.)

That is, can "The cat is not outside when it's raining" convey, "I've never so far seen the cat outside when it's been raining (but it might happen tomorrow)." In that case, the use of the present tense is what is misleading. If all you are trying to convey is something that so far has invariably happened and not that there are grounds for predicting anything from this, then the present perfect tense must be used. The present tense is used only for present states of affairs or general statements that occur irrespective of time (and so would also occur, presumably, in the future).

For example, the simple denial as having happened invariably but without predictive implications might be spoken by, say, the President's wife: "I've never been in this room when George Bush has been in it." Clearly, if she said this, she would not be intending to convey any hint of what might happen five minutes from now. But if she said, "I'm not in this room when George Bush is in it," this would be a general statement, and so it would be taken as having a predictive value also, as implying that for some reason she is not permitted in the room when the President is there (or that she refuses to be in it when he's there). So "is incompatible with," meaning "it is not the case that p and q," cannot be taken to mean, "It has so far not been the case that p and q."

Of course, if you want to adopt Hume's criterion of causality, then you could say that no general proposition (except a tautology) has any predictive force, and all are simply summations of what has been observed so far. But I fail to see how you could make such a general proposition, because it's a general proposition which is not a tautology. In that case, if it is true, it would only apply to the ones you have seen so far, and would have no bearing on any other one. And again, your statement of it would be misleading, because all you meant to say was, "So far, I haven't run across any non-tautological general proposition that has any predictive force, but the next one might be one," like Mrs. Bush's past-tense statement above. But then why say no general statement allows you to know anything beyond what is observed? You certainly mislead people into thinking that this general statement applies beyond the ones you've seen.

So to say that the "is incompatible with" statement merely means that so far the two parts have not happened to be conjoined is to convey by the use of the present tense that they are incompatible, when you don't want to assert that.(5)

Hence, the only way you can be clear in what you are trying to say with an "is incompatible with" statement is that it asserts what you think is the fact of incompatibility between the two components. You are not asserting the grounds you have for this, but merely what you consider the fact. Hence, you may know that your cat hates to get wet, and so you say, "The cat is not outside when it's raining." All your hearer knows is that you are asserting that it is impossible for both components to be true--not that you are asserting that they are not both in fact true at the moment.

Where are we, then?

Conclusion 2: The weak "is incompatible with" statement of contemporary logic has for practical purposes no occasion to be made as a statement.

Let me, then, give the rules for the "is incompatible with" compound, given that it expresses the incompatibility of the components:

Rules for "is incompatible with"

Inferential mode:

1. If the compound and one of the components are affirmed, it follows that the other must be denied.

2. If the compound is affirmed and one of the components is denied, no conclusion follows.

Refutational mode:

3. If both of the components are affirmed, the compound must be denied. This refutes the connection.

4. If one of the components is denied, nothing follows with respect to the compound.
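These rules can be verified mechanically from the logical function alone; here is a minimal sketch in Python (the helper "rows," which lists the truth-table lines consistent with what has been affirmed or denied, is mine):

    from itertools import product

    def rows(*constraints):
        # The truth-table lines (p, q) consistent with every constraint given.
        return [(p, q) for p, q in product((True, False), repeat=2)
                if all(c(p, q) for c in constraints)]

    incompatible = lambda p, q: not (p and q)  # the logical function only

    # Rule 1: compound affirmed and "p" affirmed: "q" is forced to be false.
    print(all(not q for p, q in rows(incompatible, lambda p, q: p)))  # True

    # Rule 2: compound affirmed and "p" denied: "q" is left open.
    print({q for p, q in rows(incompatible, lambda p, q: not p)})  # both True and False survive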

The next connective is the contrary of "is incompatible with," and is sometimes called the "inclusive or"; it is usually stated "and/or" in informal speech; essentially, it is "not neither."

The logical function of "and/or" is that at least one of the component propositions must be affirmed.

The meaning of "and/or" is that the possibilities referred to are connected in such a way that one of them is in fact realized, though which is realized is not expressed by the statement.

This will obviously take a little clarifying. First, in ordinary use of this connective, the compound statement can also be stated "One or the other or both," to distinguish it from the disjunction, which we will see after this. The word "or" in English is ambiguous, since it can mean "one or the other" or "one or the other or both," and so clear speakers and writers use "Either...or" and "and/or" (or "(Either)...or...or both") when there is a danger that the context will not distinguish the two.

For instance, a person might say, in reference to some scandal, "Either there's something wrong with the corporate structure, or management is corrupt, or both," or "That cat is clever or lucky or both."

What this connection actually asserts as a fact is the necessity of at least one of the components, usually because they are assumed to be an exhaustive list of the explanations of some affected object (which, if you will recall from Chapter 1 of Section 2 of the first part 1.2.1, is a contradiction by itself, but which as concrete can have a complicated causer). Explanations do not necessarily exclude each other (as, for example, the scandal in the corporation might be partly due to a faulty corporate structure and partly to corrupt management); but there has to be at least one; and if you list them all then they can't all be eliminated.

The proposition is refuted by denying both components, because the list of components (which may contain more than two, of course) is asserted to be exhaustive. But like the compounds we have seen already, it is not confirmed by affirming one of the components, or even both of them, because there might be another item to the list not taken into account. For instance, the clever and/or lucky cat above, in order to escape the dangers that occasioned the remark, might be being watched over by its owner, in which case it might be true that it's neither clever nor lucky, but just loved. Or, of course, it could be all three. So, even though the cat's being clever is consistent with "That cat is either clever or lucky or both," it doesn't prove that the statement has to be true.

I suppose I should point out here that in contemporary logic, there is a valid inference from "p" to "p and/or q," which suffers from the informal flaw that something appears in the conclusion which was not in the premise. Formally speaking, the argument "The cat is clever. Therefore, the cat is either clever or lucky or both" is sound if the cat is clever (because if it's clever, obviously it's clever, making the "and/or" proposition true by default). But the compound proposition as a compound is then irrelevant to the argument, and so even in contemporary logic it is ruled out, not formally, but by the informal fallacy of irrelevance.

Here is the truth table for this compound:

p      q      p and/or q
T      T          T
T      F          T
F      T          T
F      F          F

And once again, this says that what contemporary logic means by "and/or" is not and/or, but a denial that both are false, which can be a mere statement of fact. That is, if "p" is true, it is obviously false that both "p" and "q" are false; and that is what the "inference" above has to mean.

The "and/or" of contemporary logic is a weak "and/or" which simply means that one proposition is true, and says nothing about whether one has to be true or not.

We must again discuss whether we can ever sensibly make such an "and/or" as a statement of fact. Clearly, with no information about either component, it can't be asserted. If all that is known is that one component is false, it doesn't follow that the compound is true, because the other proposition could be false, making the compound possibly false. If all that is known is that one proposition is true, then this does not constitute sufficient grounds for connecting it with the other proposition, because any other proposition, true or false, would on these grounds fit just as well. Why would you, knowing that George Bush is President, convey information to someone by saying, "George Bush is President, and/or there is life on Mars"? It's true, but you have no reason for saying it. If you know that one is true and the other false, you are misleading your hearer into thinking that both might be true when they can't be--as a mere statement of fact, because what is in fact false can't be true if it's false. And similarly, if you know that both are true you would be conveying the false information that one might be false when it can't be.

This also is subject to the same sort of qualifications as with the "is incompatible with" statement as a mere statement of fact. It could be said, and it would not be false, but it would be either misleading or capricious to say it. And with that, we can draw the following conclusion.

Conclusion 3: There is for practical purposes no occasion where contemporary logic's "and/or" could be uttered as a mere statement of fact.

But if "and/or" means that at least one component must be true, then it would be self-contradictory if the list of possibilities wasn't exhaustive, because you would then be asserting that one must be true when both could be in fact false.

Here again we have a logical aspect of statements that is not covered by contemporary logic, which does not recognize "That's not all the alternatives" as a denial--as in fact it is in every case of the use of the "and/or" proposition. But of course, that refutation doesn't involve anything within logic, which is what contemporary logic wanted to avoid. But in that case, what it should have done is say that no conclusion can be drawn from knowing the truth of one component, not construct the logical system in such a way that the conclusion is valid by making up this "weak" sense of "and/or" which never has been used and never will be. Why not rule out the "formally valid but not necessarily always the case," with its ambiguous use of "true," by stating a rule that the statement is meaningless as a mere statement of fact and is to be used only when there is an exhaustive list of possibilities? Then the logical function would be allowed to do its work properly.

And this is precisely what the rules below do. Recognizing that "and/or" as used as a statement implies the necessity of one component's being true, here are the logical things you can do with it:

Rules for "and/or"

Inferential mode:

1. If the compound and one of the components are affirmed, no conclusion follows.

2. If the compound is affirmed and one of the components is denied, it follows that the other must be affirmed.

Refutational mode:

3. If one or both components are affirmed, nothing follows with respect to the compound.

4. If both of the components are denied, the compound must be denied. This refutes the connection.
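Again, the logical function by itself bears these rules out; a minimal sketch on the same pattern as before (the helper "rows" is mine):

    from itertools import product

    def rows(*constraints):
        # The truth-table lines (p, q) consistent with every constraint given.
        return [(p, q) for p, q in product((True, False), repeat=2)
                if all(c(p, q) for c in constraints)]

    and_or = lambda p, q: p or q  # the logical function of "and/or"

    # Rule 1: compound and one component affirmed: the other is left open.
    print({q for p, q in rows(and_or, lambda p, q: p)})  # both True and False survive

    # Rule 2: compound affirmed and one component denied: the other is forced.
    print(all(q for p, q in rows(and_or, lambda p, q: not p)))  # True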

The next connection, "either/or," is given the name "the exclusive 'or'" in contemporary logic; and in Aristotelian logic, the inference made from it is called the "disjunctive syllogism," because it is a more common way of reasoning than either of the two we have discussed. Actually, the commonest fallacy dealing with both "is incompatible with" and "and/or" is that (since they can be stated using simply "or") they are apt to be confused with this one (while in fact, "either/or" is another way of saying "not both and not neither").

The logical function of "either/or" is that one of the components must be affirmed and the other one denied.

The meaning of "either/or" is that the two facts referred to contradict each other.

Here is the truth table for the proposition in contemporary logic:

p      q      either p or q
T      T           F
T      F           T
F      T           T
F      F           F

Once again (and I will again leave you to take my word for it or do it out yourself) we have a case of the fact that you can conclude to a denial of the compound (and so a refutation of the connection) by either affirming both or denying both of the components.

But you can't confirm the fact of the compound by affirming one component and denying the other (because the actual fact might be either a not-both or a not-neither compound, both of which are compatible with one component's being true and the other false). Thus, for example, "Either you're in New York or you're in Chicago" can't be established by saying that you are in fact in New York and not in Chicago--because clearly the proposition is actually an "is incompatible with" proposition that's disguised by the use of the wrong connective (i.e. as stated, it would be refuted if you were in Cincinnati).

Note, however, that doing out the truth table check will show you that the proposition "p and not q implies either p or q" is a valid inference in contemporary logic. Here there is not, as in "and/or," a rule of irrelevance to eliminate this fallacy. In contemporary logic, it is valid, and if the premises are true, sound.
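A minimal sketch in Python (names mine) bears out that claim by the contemporary criterion:

    from itertools import product

    implies = lambda p, q: (not p) or q
    either_or = lambda p, q: p != q  # the "exclusive or" by logical function alone

    # "p and not q" implies "either p or q" comes out true on all four lines,
    # and so counts as "valid" by the contemporary criterion.
    print(all(implies(p and not q, either_or(p, q))
              for p, q in product((True, False), repeat=2)))  # True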

Either/or as used in contemporary logic is a weak "either/or" which simply asserts that one component is true and the other false without saying anything about whether this has to be the case or not.

There is again something funny going on, which is masked in the example, because being in New York and Chicago are incompatible. But if we take two propositions that are compatible but don't have to be simultaneously the case, we can see how strange the meaning of the connective in contemporary logic is. It would be odd to say that "You are healthy and you are not six feet tall" implies that you are either healthy or six feet tall, because "you are either healthy or six feet tall" (the "exclusive" use of "or") seems to be saying that you can't be both. Well of course you can't be both if you're not in fact six feet tall; but to offer this is to use "can't" in two senses. One means "being healthy is in itself incompatible with being six feet tall," while the other means "both are not the case, and so at the moment they happen to be incompatible." But if you want to say simply "One and not the other," why not say that instead of saying "Either one or the other"? That is, why open up the ambiguity by using "Either/or" when "p and not q" would be clear?

Here again, logic is not being unambiguous. The "exclusive or" in contemporary logic does not exclude either "is incompatible with" or "and/or" in two out of the four cases in its truth table. Hence, the following proposition expresses a valid inference:

p and not q implies (p is incompatible with q) and (p and/or q) and (either p or q).

If we fill in the p's and q's, we get, "The fact that you're healthy and not six feet tall implies simultaneously that you're not healthy when you're six feet tall, you're healthy and/or six feet tall, and you're either healthy or six feet tall." That sounds really peculiar, but you can make sense out of it if you say, "If you're healthy and you're not six feet tall, then you're not both healthy and six feet tall, you're not neither healthy nor six feet tall, and you're one or the other."

But of course, when you put it this way, it is trivial as a statement. If you put it the way it was originally stated, then even ignoring the impression someone might get from the first conclusion that you're trying to say that you're healthy only when you're not six feet tall (i.e. that your being healthy is incompatible with your being six feet tall), why would you say "you're healthy and/or six feet tall, and you're either healthy or six feet tall"?

That is, "and/or" includes the possibility of both, while "either/or" excludes that possibility. And here is where the two senses of "can't" mentioned above come in. In contemporary logic, "and/or" is true in this proposition because in itself it is possible for both to be true, but "either/or" is true, because as it happens they can't both be true because one is in fact false. And then there's the fact that "and" in logic doesn't mean "and."

So you have to be very, very careful if you're going to apply contemporary logic to statements. They don't mean what they seem to mean.

Warning: When contemporary logic talks about something being "impossible," this can mean simply that it is not the case.

This is the celebrated unambiguity of contemporary logic?

Further, as this example shows, to call "either/or" the exclusive "or" is to give the impression that it is incompatible with "is incompatible with" (in which both can be false) and also with "and/or" (which, after all, is called the "inclusive or"). But it isn't; it's inclusive of both of them in the sense in which the intersection of two sets is inclusive of the two sets that intersect in it (i.e. it is the set that includes part of each of them).

And once again the problem is in the fact that contemporary logic does not recognize the additional information in the meaning of the connective. "Either/or" is in fact used only when the two propositions in question contradict each other, or only when you have grounds for saying that one has to be true and the other false, not merely when one happens to be true and the other happens to be false. If you say, "Well, it can be used simply as meaning 'one and not the other,'" then I say that contemporary logic's goal to be clearer than ordinary speech has been violated. If you're stating it as a mere fact and not based on some internal contradiction, then you have to know which one is true and which one is false, and so why do you make the ambiguous statement, "One or the other of these is true and the other one is false," instead of telling what you know? Or why not use "and/or," if you want to leave open the possibility that both can be true?

I'll tell you this much. There are plenty of contemporary logicians who are not clear about the distinction I have been making, and who think that it's English that conveys inexact information and logic that always means exactly what it says. If it does, it certainly doesn't convey to the unsuspecting what it's saying.

Let's face it; in any rational system of logic, it is improper to use either/or when both can be true, even though one in fact happens to be true; that is "and/or," not "either/or," no matter how much you may be able to justify it using that etiolated sense of "can't." And the same goes for using "either/or" as in the New York and Chicago statement above, where the proper connective is "is incompatible with."

From this it follows that reasoning to "either/or" from "p and not q" ought to be forbidden. And that, in fact, is what traditional logic has done for millennia. If it is a convention to forbid it, then that convention is closer to the way statements are made (in Greek, Latin, English, Chinese, and Swahili) than contemporary logic's convention of taking "either/or" to mean no more than "one and not the other."

Then let me list the rules for how the disjunctive syllogism actually works:

Rules for the disjunctive syllogism

Inferential mode

1. If the compound and one of the components are affirmed, it follows that the other must be denied.

2. If the compound is affirmed and one of the components is denied, it follows that the other must be affirmed.

Refutational mode

3. If both of the components are affirmed, it follows that the compound must be denied. This refutes the connection.

4. If both of the components are denied, it follows that the compound must be denied. This refutes the connection.

5. If one component is affirmed and the other denied, nothing follows with respect to the compound.
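And once more, the mechanical check (the helper "rows" is mine, as before) confirms the two valid moves:

    from itertools import product

    def rows(*constraints):
        # The truth-table lines (p, q) consistent with every constraint given.
        return [(p, q) for p, q in product((True, False), repeat=2)
                if all(c(p, q) for c in constraints)]

    either_or = lambda p, q: p != q  # the logical function of "either/or"

    # Rule 1: compound and "p" affirmed: "q" is forced to be false.
    print(all(not q for p, q in rows(either_or, lambda p, q: p)))  # True

    # Rule 2: compound affirmed and "p" denied: "q" is forced to be true.
    print(all(q for p, q in rows(either_or, lambda p, q: not p)))  # True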

The final basic way logic combines propositions is the one we saw proleptically, called the "implication" or the "conditional proposition," or in traditional Aristotelian logic, the "hypothetical syllogism" (from hypo-thesis, a "putting under" or "supposition," because "q" "supposes" "p" "underneath" its intelligibility somehow, as an effect "supposes" its cause). It is stated either "if p then q" or "p implies q"; it is the general form of the inference, according to contemporary logicians, although as they use it, it is not itself an inference, because in order to make an inference you have to affirm or deny the p's and q's; but as the form of the inference, it's supposed to reflect the kind of thing you're doing when you make an inference.

Well it doesn't, as I think has been made clear. But before I discuss it further, let me state the function and meaning of the compound:

The logical function of "if then" is that an affirmation of the antecedent (the contents of the "if" clause) demands an affirmation of the consequent (the contents of the "then" clause), and a denial of the consequent demands a denial of the antecedent.

The meaning of "if then" is that the consequent depends somehow on the antecedent.

It is here that contemporary logic really takes flight into never-never land. Since contemporary logic wants to have nothing to do with things like question marks and wants its truth tables filled with either Ts or Fs, then this proposition elevates the non sequitur into a legitimate implication.

I already gave the truth table for this type of proposition when I introduced this section on compounding propositions, so I refer you back there if you want to look at it.

Now it is true that, since formal logic deals with the form under which statements go together, the contents of the statements don't enter into it. There was no problem with this in Aristotelian logic, because it always used the inferential mode of reasoning ("sophistical refutations" was an area that wasn't strictly formal); and in the inferential mode you always first affirm the compound and then argue to the components. Hence, if the compound is some non sequitur like "If I schedule an outdoor party, then it rains," it doesn't matter; because, assuming that compound statement to be true, you can make it rain by scheduling a party, or you can guarantee that there'll be no party by noticing that the day is sunny.

But in the real world, if you're going to argue from the truth of the components to the truth of the compound, then the meaning of the connective can't be ignored; because the truth of the compound implication as a factual statement depends on whether the consequent in fact depends on the antecedent. That is, what you would be saying with respect to the compound above is that scheduling a party and having it rain proves that scheduling a party implies having it rain, which of course is ridiculous. That's the first problem, and it's the same one as we had with other compounds as treated by contemporary logic.

The implication in contemporary logic's conditional proposition is a weak implication (called "material implication"), because all it means is that it is false that simultaneously the antecedent is true and the consequent false, and says nothing about whether the consequent follows from the antecedent.

There has been a firestorm about this particular deviation of contemporary logic from the way we use statements, because it is obvious that "implies" as a word means more than a denial of "p and not q."

I said that it is a valid inference in contemporary logic from "p and not q" to "p is incompatible with q, p and/or q, and either p or q." But now (and you can work this out on the truth tables if you want), it is a valid inference from "not p and q" to "p is incompatible with q, p and/or q, either p or q, and p implies q."

Come again? It's a valid inference from "not p and q" to "p implies q"? Knowing that the Cincinnati Reds are losing their game today and that it's raining out proves that if the Reds are winning, then it's raining? (Yes. It also proves, by the way, that if the Reds are winning, then it's not raining.)
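If you want to see it done out, a minimal sketch in Python (names mine) shows both "proofs" coming out as tautologies:

    from itertools import product

    implies = lambda p, q: (not p) or q

    # "not p" materially implies "p implies q": true on every line...
    print(all(implies(not p, implies(p, q))
              for p, q in product((True, False), repeat=2)))  # True

    # ...and it equally "proves" the opposite consequent.
    print(all(implies(not p, implies(p, not q))
              for p, q in product((True, False), repeat=2)))  # True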

I think you can see why this has caused something of a problem.

Warning: Even though material implication uses the word "implies," it is compatible with the absolute independence of the two components from one another.

The only requirement for saying that "p implies q" is that the combination of "p's" happening to be true and "q's" happening to be false is forbidden. Why? Because the inventors of this logic decreed that it is, not because of anything about "p" and "q," certainly, and not because the connective "implies" means this. Everyone who has ever struggled with contemporary logic has to spend a great deal of time erasing any meaning to "implies" that has anything to do with a sequence or a dependence or anything else; and of course, once they do this, they think they've finally "mastered" something very difficult, and they fight tooth and nail for how "powerful" material implication is and how much more "accurate" it is than the messy way we talk, and all the rest of it.

Now "formal implication" (that is, "implication" in the sane sense of the word, expressing dependence) also makes it impossible for "p" to be true and "q" false, so some of the invalid inferences in "formal implication" are also invalid in material implication, and all of the invalid inferences in material implication are also invalid in formal implication. But there certainly are valid inferences using material implication that are invalid by any rational standard of when something follows. For instance, let us try to unpack the inference above about the Reds, to see if head or tail can be made of it in any sense. First of all, I can eliminate some of the confusion by saying that the conclusion also follows from the mere fact of "p's" being false (you don't also have to know that "q" is true).(6)

So in this slightly easier form, what it says is "It's always true to say that if it's false that the Reds are winning, then it's true that if the Reds are winning, then it's raining."

But that's still a little difficult; let's take a simpler example: If I am here, then I am at home. Now when it's false that I am here, it is legitimate in contemporary logic to infer that if I am here I am at home--no matter where I actually happen to be. (To avoid quibbles, I mean by "here" a certain address I could give you.) Granted, it happens to be true that here is my home, so if I am here, I am at home. But it certainly doesn't follow that if I am in Chicago, then if I am here I am at home (that is, that my being in Chicago establishes where my home is).

Now then, the first thing to remember is that this is a negative statement in contemporary logic, not an affirmative one. So "If I am here, I am at home" is actually the statement "'I am here and not at home' is false"; this is what is said to "be implied" by my not being here. Now what that says is not an implication, but another negative statement; so the whole thing becomes "'I am not here, and ("I am here and not at home" is false) is false' is false." The last "is false" is the denial which is the basic "implication." The next-to-last is the denial of the "q" part of the basic implication (it is, remember, not (p and not q)). The third from last is the denial which forms the embedded "implication," and the "not at home" is of course the denial of the "q" in that embedded implication.

But two of these last three "is false's" cancel each other out, so simplifying, we get (putting, for clarity, the last "is false" first now), "It is false to say that 'I am not here, and I am here and not at home.'"

Well of course that's false, because it's false to say that I'm not here and I'm here, wherever my home is. So this proposition is going to be false because hidden in it is "not p and p," not because anything depends on anything else.

Warning: "Implications" which use material implication are really just negative propositions.

You know, I don't think much of a logical system that says it is "making an argument" when in fact it is making a denial. If this is what these people mean by "speaking precisely," and "saying exactly what you mean," I would hate to hear them speak imprecisely and say what they mean inexactly.

The justification for taking "p implies q" to mean "not (p and not q)" rather than "q depends on p," (which only implies "not (p and not q)") is that there are allegedly ambiguous uses of "p implies q" in ordinary speech, where sometimes causal dependence is meant, sometimes rule-following dependence (as in logic), sometimes mere sequential dependence (as in winter's implying spring to follow), and so on; and then there's the statement, "If you win this bet, I'll eat my hat," where something absurd is made to "depend on" what the speaker considers a false statement. Obviously, that can't be dependence, these people say; and so the "common core of meaning" is the negative proposition above.

But in the last case, that statement is a premise of an enthymeme, one which the speaker thinks the hearer is intelligent enough not to need spelled out. For the benefit of our logicians, let us do so. What it conveys to anyone with any sense is, "If you win this bet, I'll eat my hat, and I'm not going to eat my hat; and so it's impossible for you to win this bet." Knowing that a false consequent refutes the antecedent, a proposition which someone wants emphatically to deny is made to imply something known to be false. It is a rhetorical device, and the first proposition was not "proposed" as false, but as problematic until the false "conclusion" was stated. So it isn't that "anything follows from a false statement"; it is the perfectly legitimate reasoning that anything that implies a false statement has to be false. So that usage does not by any means show that in ordinary discourse we ever use "implies" without the notion of some kind of dependence.

Now even if every case of "p implies q" makes "not (p and not q)" true, it is a far cry from that to say that "p implies q" means (or even "really means" or "ought to mean, if you want to be accurate") "not (p and not q)." This is about as intelligent as saying that since every human being can talk, then what is "really meant" by "human being" is "talker." Humans can do a lot besides talk.

Then why did the logicians get into that convoluted way of reasoning by turning the process into negative propositions that have negative propositions embedded inside them? The basic reason was that they didn't want to have compound propositions with anything but Ts and Fs in the truth tables, and in actual implications, you'd have to put a question mark in a couple of places.

But if you take this ploy, then to follow what is going on in the simplest inference becomes an enormously tedious task involving negations of negative negations, trying to keep them straight as above.

Now of course, you don't have to follow an argument using contemporary logic. You can simply state it and then set up your computer to do the truth tables and wait until it spits out the truth table for the final result. Then you know that your conclusion is valid or not by the standards of contemporary logic; but of course, if it turns out to be valid, you still don't know whether it's only "formally" valid and not--shall we say?--existentially so without going back over it and trying to spot the "informal" fallacies.

Note that there is nothing that is invalid in contemporary logic that is not invalid in traditional Aristotelian logic, and so nothing is gained by using contemporary logic to spot fallacies. But there are conclusions that are valid in contemporary logic that are simply nonsense as statements of fact.

I will now consider that I have proved that contemporary logic is a waste of time; and this applies both to "propositional logic" (where you don't care what the proposition looks like but just combine whole propositions) and "predicate logic" (where you care about subjects and predicates), because the definite proposition in contemporary "predicate logic" has the form of the implication and so is infected with the disease of material implication.(7)

But why did logicians develop this system? It was partly to get out of a Hegelian idealism, whose logic, you will remember, was the logic of contradictions, and was also "metaphysics" in his sense of the term. That was the main incentive for purging logic of all that Hegel stuffed into it.

But it was also true that the inventors of the system (particularly Boole) were mathematicians, and, like mathematicians, they wanted to develop a system of logic that was "closed and complete." A system is closed if all the conclusions from premises within the system are still inside the system, and it is complete if every operation on meaningful statements within the system results in a meaningful statement. Thus, addition is closed and complete on the set of the natural numbers, because any number added to any other number yields a natural number (natural numbers are {1,2,3,...}). Subtraction is not closed on the set of the natural numbers, because 2 - 3 gives a result (-1) which is not a natural number. Division is closed over the rational numbers (the integers {...-2,-1,0,1,2...} plus all the fractions), but division is not complete over the rational numbers, because division by zero doesn't yield any result at all.
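The arithmetic illustrations can be seen in a few lines of Python (a sketch of mine; the point is only where the results land):

    # Addition is closed on the natural numbers: the result stays in the set.
    print(2 + 3)  # 5

    # Subtraction is not closed on the naturals: the result can leave the set.
    print(2 - 3)  # -1, not a natural number

    # Division is not complete: division by zero yields no result at all.
    try:
        print(2 / 0)
    except ZeroDivisionError:
        print("no result whatever")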

You'll notice that mathematics seeks closure by simply inventing numbers that fit; and we will see in the next section why this is legitimate in mathematics. But it doesn't follow from the fact that mathematics can do this that it is legitimate to do this in logic, if logic is supposed to either reflect or apply to the way we reason--and if it doesn't, it's mathematics, not logic. Note, however, that mathematics uses logic (in the ordinary sense), and so presupposes it and is not the same as it. That's one difficulty in trying to make a "mathematical logic." You are taking something that is a particular example of a logical system and trying to use it as a model for the system it is only one particular example of. It wouldn't be surprising if a mathematical logic would work in some cases (those in which the reasoning was similar to what is done in mathematics), but not in others (those cases of logical reasoning that are not in mathematics).

But the real problem in taking mathematics as a model is trying to make logic complete, so that every logical operation on something that is true or false results in something that is true or false. Propositions themselves can be true or false and nothing else; but conclusions can be true, false, or problematic, because inference deals with the necessary truth or falseness of the conclusion based on the truth or falseness of the premises and the type of reasoning involved. Hence, logic as we actually reason necessarily will be incomplete, because, while the conclusion as a proposition has to be either true or false, you can't generate one or the other always from the truth or falsity of the premises.

And if you try to make it complete by simply filling in the truth tables à la mathematics, arbitrarily declaring as "True" certain things which should have question marks, then you get something which does not have an application outside itself, and which (as in the case with the definite and indefinite propositions) has some glaring inconsistencies within itself.

It was a noble effort, but it was doomed to failure, because to model logic on mathematics is a classic case of "reasoning from the particular to the universal" (or from the indefinite to the definite). Of course, there are a lot of contemporary logicians who aren't going to swallow this, because they have Ph.D.s in the field of logic, and have studied it for years and years, and can perform operations using symbolic logic that would make your head spin. And it looks so mathematical! And as everybody since Descartes knows, mathematics is the source of all knowledge and truth.

But let's face it. Contemporary logic is to logic what astrology is to astronomy. You can spend dozens of years studying astrology and can draw charts and all that sort of thing by a very complicated and intricate system that is very difficult to learn; but when all is said and done, astrology rests on the foundation that the earth is at rest in the center of the universe and that the spheres of heaven moving around it cause all the changes on it--and this happens to be false.(8)

By the same token, no matter how complex modern symbolic logic may be, it depends on a radically false epistemology (which is no less strong because it wants to avoid epistemology and thinks you can--which is an epistemological stance in itself) plus the false notion that logic can be modeled after mathematics.

The fact that contemporary logic works well so often is simply a reflection of the fact that mathematical reasoning is a very large subset of logical reasoning; and it isn't surprising that those with a mathematical turn of mind (and who but a person with a mathematical turn of mind would attempt to get into the field of contemporary logic?) would not notice the cases where the applications of their logic were absurd. Hence, they would have no reason to suspect the unsoundness of the logical system itself.

So much for my attack on contemporary logic. I have shown (a) that it doesn't work as applied to statements, (b) why it doesn't work, (c) why it was developed, and (d) why those who are in the field would think that it was a good theory.

To return, then, to the implication as we actually use it, here are the rules:

Rules for the hypothetical syllogism

Inferential mode

1. If the compound and the antecedent are affirmed it follows that the consequent must be affirmed. This valid process is called modus ponens.

2. If the compound is affirmed and the antecedent is denied nothing follows with respect to the consequent.

3. If the compound is affirmed and the consequent is affirmed, nothing follows with respect to the antecedent.

4. If the compound is affirmed and the consequent is denied, it follows that the antecedent must be denied. This valid process is called modus tollens.

Refutational mode

5. If the antecedent and the consequent are affirmed, nothing follows with respect to the compound.

6. If the antecedent is affirmed and the consequent is denied, it follows that the compound is false. This refutes the implication.

7. If the antecedent is denied, nothing follows with respect to the compound.

This is more complex than the other forms of compounding propositions, because here the order in which the two components are placed is significant. Hence, in the inferential mode of reasoning, the first two rules deal with proceeding from the antecedent to the consequent; and the only valid one is modus ponens (the "putting mode" which could be translated as the "affirmational mode," since it goes from affirmation to affirmation). The second two go backwards from the consequent, and in this case, the valid one is the modus tollens (the "taking mode" or "denial mode") of a denial's implying a denial.(9)
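
If you like to see such things checked mechanically, here is a minimal sketch in Python (my own illustration; note that it uses the material reading of "implies" just for the test--even on contemporary logic's own reading, the same two modes come out valid and the same two invalid):

    from itertools import product

    def valid(premises, conclusion):
        # An inference is valid if no truth assignment makes every
        # premise true while the conclusion is false.
        return all(conclusion(p, q)
                   for p, q in product([True, False], repeat=2)
                   if all(prem(p, q) for prem in premises))

    implies = lambda p, q: (not p) or q   # the material reading, for the test

    # 1. modus ponens: compound and antecedent affirmed -> consequent affirmed
    print(valid([implies, lambda p, q: p], lambda p, q: q))          # True
    # 2. denying the antecedent: nothing follows about the consequent
    print(valid([implies, lambda p, q: not p], lambda p, q: not q))  # False
    # 3. affirming the consequent: nothing follows about the antecedent
    print(valid([implies, lambda p, q: q], lambda p, q: p))          # False
    # 4. modus tollens: compound affirmed, consequent denied -> antecedent denied
    print(valid([implies, lambda p, q: not q], lambda p, q: not p))  # True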

The invalid mode "arguing" from the truth of the consequent to the truth of the antecedent is the reason why no scientific theory can be verified. Every scientific theory is of the form "p implies q," because it is giving the cause of the effect in question; and obviously all the theory's predictions are the result of reasoning from the truth of the theory to the necessary truth of the results predicted (modus ponens). But by observing that these predictions are in fact true, when you test the theory, you are now in the mode of reasoning from the consequent to the antecedent, and arguing from the truth of the consequent is invalid. You can refute a theory by showing that its predictions are false, but you can't verify one by its predictions.

And of course, this is why I called arguing from the components to the compound the "refutational" mode of reasoning, because if you analyze what you are doing, you are supposing that the compound's relation to the components in it is an implication (the inferential, normal mode), and you are arguing backwards--in which case, the only valid reasoning is the modus tollens. Contemporary logic has surreptitiously assumed that the components can "generate" the compound as well as the compound "generating" the components, precisely because the refutational mode of reasoning works; but what they didn't see is that the form of the relation of the compound to the components is in all cases "compound implies something about components," not "compound if and only if something about components" (p implies q and q implies p).(10)

Now then, if you want to use contemporary logic as a guide for making inferences, and you want your conclusions to express facts if your premises do, then you have to add to the truth table a column indicating whether the connective is "true" or not: that is, whether it is applicable to these propositions. Just as the truth of the propositions is verified extra-logically, so whether the connective belongs is also verified extra-logically. But once this is done, symbolic logic will follow the rules I have given above.

For those who are interested, here are a couple of examples of what the truth tables will look like:
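
Since I can't conveniently draw the tables here, let me sketch in Python (the layout is my own) how the augmented table for "and" would be generated; the six-line table for "implies" would be built on the same plan:

    # An augmented truth table for "p and q": the third column records
    # whether the connective is appropriate to these two propositions,
    # and the compound is false whenever it is not.
    rows = [(p, q, conn) for conn in (True, False)
                         for p in (True, False)
                         for q in (True, False)]
    print("p      q      connective  compound")
    for p, q, conn in rows:
        compound = conn and p and q
        print(f"{str(p):6} {str(q):6} {str(conn):11} {compound}")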

I think you can see what happens. The compound proposition now is false on the last four lines of the table, which is to say whenever the connective is inappropriate. Since contemporary logic ignores these last four lines, it makes the compound true in all these cases, when in fact it isn't.

The truth table for "implies," however, is somewhat peculiar: a truth can't depend on something false, because there's nothing to depend on if that's the case. Hence, the connective is inappropriate whenever "p" is false, as well as inappropriate sometimes when "p" is true. Therefore, the truth table needs only six lines to cover all the possibilities and make symbolic logic conform to formal implication.

Now of course, since truth tables are not used except at the most elementary level, all of the logical transformations (the "short cuts" and theorems) would have to be worked out taking this extra information into account. But that is not the point of this chapter, and I rather suspect it's not something for me to do. I simply propose it as a suggestion if anyone wants to do logic that looks like contemporary logic, and still guarantee that his conclusions will have to state facts if his premises state facts.

In any case, these are the basic ways of combining propositions. Other connectives we use are more or less complicated combinations of them.

There is one, however, that is quite interesting, because it seems so simple and yet is so complex: "but." I am not talking of "but" as it is usually used in traditional logic (All men are mortal but John is a man, therefore), because that "but" is simply "and" in disguise and is not used that way in ordinary speech.(11)

I am talking about "but" in the sense of "The sun is shining but it is raining."

What does "but" mean? It means, "The statement I am about to utter would seem to be the opposite of what would follow from what I just said and what I just said is true and the statement to follow is true" (which of course implies that the inference you were going to draw from what I just said is invalid). For instance, "The sun is shining, but it's raining" means, "The sun is shining and you might infer that it wouldn't be raining; and (despite that), it's raining." Or, "The sun is shining and you might infer that it's raining, and the inference is not valid, because it's raining."

Schematically, "but" means the following: "p and not-q and not (p implies q)"--using the symbols from now on to reflect statement logic, not contemporary logic. The thing that keeps it from being a simple "and" connecting an affirmative and negative proposition is the implicit implication that is refuted by it. You wouldn't connect two statements with a "but" unless you expected your hearer to disagree with the second one because of what he thought followed from the first one. So "but" is a way of steering the hearer away from an invalid inference, and preventing him from leaping to a false conclusion. But it isn't treated logically as a separate connective (he said, connecting this statement with the preceding by "but") because (he said giving the antecedent to this consequent) it can be described as a combination of simpler connectives.

I don't suppose it is amiss also to mention "because" as a connective. This gives the conclusion of an inference first, and then the fact that implies it; so the form is "q and (p implies q) and p." So, "I am home because I am not at work" says, "I am home, and if I am not at work, then I am home; and I am not at work." This actually gives what is dependent first and then what it depends on, which reflects the way we reason when we go from effect to cause, and so is more natural actually than showing the relation of dependence (the compound), and then arguing from the independent in that relation to what depends on it--not that this is illegitimate, nor that it might not be clearer in revealing the reasoning process.


Notes

1. I owe this insight to a suggestion from my son.

2. I should point out that some logical notations of the truth table list the column under "p" as I did (TFTF) while others list it (TTFF) and use the former for "q." For practical reasons, I happen to think that the way I did it is preferable, because if there is to be an "r," then the eight lines under the "p" is (TFTFTFTF), that under the "q," (TTFFTTFF), and under the "r," (TTTTFFFF). If there is also to be an "s," then the sixteen lines of these three will just repeat, and you have (TTTTTTTTFFFFFFFF) for "s," and so on. It just makes things easier to write, which, believe me, can be a blessing if you have to go on to "t," "u," and beyond.
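
For anyone who would rather generate these columns than write them out, here is a small sketch in Python of the pattern the note describes (the function name is my own):

    # Under the n-th letter: alternating runs of T's and F's, each run
    # of length 2**(n-1), repeated to fill the table.
    def column(n, total_rows):
        run = 2 ** (n - 1)
        pattern = []
        while len(pattern) < total_rows:
            pattern += ["T"] * run + ["F"] * run
        return "".join(pattern[:total_rows])

    rows = 8  # three letters: p, q, r
    print(column(1, rows))  # TFTFTFTF  under "p"
    print(column(2, rows))  # TTFFTTFF  under "q"
    print(column(3, rows))  # TTTTFFFF  under "r"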

3. By the way, conjoining propositions is called "logical multiplication" in contemporary logic, while "and/or" is called "logical addition." Now that you've heard the terms, forget them. This profusion of technical words is to me another way of making the smoke screen contemporary logic is hiding behind thicker.

4. Of course, if I wanted to talk à la contemporary logic, I could get picky and say, "If two statements don't happen simultaneously to be true then they're in fact incompatible with each other (because one is true and the other is false or both are false)." But what I'm going to try to establish is that this "factual" sense of incompatibility is never intended by "not both p and q" as a statement.

5. Interestingly, the statement about general statements above is an "is incompatible with" statement, which, if it is couched in the present tense, precisely asserts the incompatibility of a statement's being non-tautological with its having predictive force. (And it's obvious from the context that Hume meant this.) But of course incompatibility would allow you to make predictions from it (which the context shows Hume also intended)--which would make it a non-tautological general statement with predictive force. I've never been able to figure out how Hume has been able to get away with some of the things he's said.

6. Of course, here we would run into the informal fallacy of irrelevance, because "q" is in the conclusion and not in the premises. But the inference is formally valid in contemporary logic in any case, and we can permit this in order to eliminate clutter.

7. For some reason, logicians don't seem to like the idea of applying propositional logic to predicate logic. I have made statements about how, based on what I have been saying, statements with subjects and predicates go together (as in the Square of Opposition) below, and have been met with, "Now don't go confusing propositional and predicate logic." For heaven's sake! Don't the propositions in propositional logic have subjects and predicates? In that case, everything that is said in propositional logic will have to be true in predicate logic; though the converse is, of course, not true.

8. Note, by the way, the fact that this is the foundation of astrology means that the science as a science depends on it ("If the earth is at the center, etc., then there is a science of the influences of heavenly bodies on our lives")--and so by contemporary logic, the fact that this foundation is false implies that the science is a true science. That dilemma in the philosophy of science was pointed out by Carl Hempel.

9. If you want the traditional names for the invalid modes, they are ponendo tollens and tollendo ponens. Even if you don't know Latin, I assume that you're clever enough to figure out what they mean and therefore which they apply to.

10. Note once again that in symbolic logic, the "if and only if" proposition is not the same as "and" because it is true both when both components are true and both are false. Hence, you can "prove" that John is at home if and only if no one else is at home by finding an instance of nobody's being at home. To put it another way, nobody's being at home implies that John is always alone when he's at home.

In actual logic, "if and only if" differs from "and" in that "and" simply asserts some connection in which both have to be true, while "if and only if" means that there is interdependence between the facts indicated by the statements.

11. In fact, it's a translation of the Greek word de, which simply means, "what follows adds to what preceded," rather than alla, the adversative "but," which is our only use of the word "but." We use "and" for additional information as well as for information that is just in general connected with what preceded.

Chapter 6

Compounds using subjects and predicates

That is all I am going to say about connecting whole propositions. Now to proceed to logical ways in which the peculiarities of subjects and predicates can allow you to combine two propositions and draw conclusions from them, let me start with the "square of opposition," which is the four possible propositions that have the same subject and predicate. It turns out that they are connected in all the ways we have talked about except "and." When we are talking about any old subject and any old predicate, by the way, the capital S is used for "Subject" and P is used for "Predicate."

So the two definite propositions (the contraries) are incompatible with each other. It can't simultaneously be the case that every human being is a typist and every human being is not a typist--and using this example, you can see that they both happen to be false.

The definite propositions imply the corresponding indefinite one (using the rule I gave above that subjects are always taken to refer to something)--and the implied one is called the "subaltern" of its definite proposition. If every human being were a typist, then at least one would be a typist; if every human being were not a typist, then at least one would not be a typist.

The indefinite propositions (the subcontraries) are "and/or's": "At least one human being is a typist and/or at least one human being is not a typist"; and with this example both happen to be true; but they can't both be false.

Finally, each of the propositions is related by "either/or" to its contradictory (the one that has the opposite reference and the opposite "quality"). Either every human being is a typist, or at least one is not a typist. Either at least one human being is a typist, or every human being is not a typist.
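
All of these relations can be checked by brute force if you model the four propositions over small finite domains; here is a sketch in Python (the encoding is my own, and the domains are kept nonempty in accordance with the rule that subjects refer to something):

    from itertools import product

    def square_holds(domain):
        # domain: for each individual S, whether it is a P.
        A = all(domain)          # Every S is P
        E = not any(domain)      # Every S is not P
        I = any(domain)          # At least one S is P
        O = not all(domain)      # At least one S is not P
        return (not (A and E)    # contraries are incompatible
                and (not A or I) # subalternation: A implies I
                and (not E or O) # subalternation: E implies O
                and (I or O)     # subcontraries can't both be false
                and (A != O)     # contradictories: either/or
                and (E != I))

    domains = [d for n in (1, 2, 3) for d in product([True, False], repeat=n)]
    print(all(square_holds(d) for d in domains))  # True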



Chapter 7

The categorical syllogism

The last of the major operations I am going to treat is the "categorical syllogism" (from the Greek word for "predicate"), first formulated by Aristotle, and one of the great achievements of the human mind.

Since the predicate term implicitly refers to a set of objects, it can be thought of as a case of class inclusion--and in fact, this is how Aristotle thought of it. For this reason, in the traditional formulation of the syllogism, the proposition involving the larger classes is put first, and the one with the smaller second, as in "All human beings are mortal, but [i.e. "and"] sailors are human beings; therefore all sailors are mortal." But notice that in this arrangement the term that mediates between what will become the subject and predicate of the conclusion (called, traditionally, the "middle term") is at the extremes of the two propositions (the subject in the first, and the predicate in the second: on the "outside," so to speak, of the propositions).

But it makes sense if you think of it as saying that human beings are included as an indefinite part of the class of mortal beings; and sailors are included within that smaller class; and so obviously they are included within the larger class that the smaller is included within.

Still, I don't think this is really what is going on in predication, as I have said both in Chapter 5 of Section 3 of the third part 3.3.5, and earlier in this section on logic; and as a matter of fact, this way of looking at predication makes it impossible to handle obvious inferences like "A horse is an animal, and the head of a horse is the head of a horse; therefore, the head of a horse is the head of an animal." It can be done easily by a little rule, if you take the approach to the categorical syllogism that I am going to take, following L. Susan Stebbing.

The idea is that, since the predicate as a word also in itself refers to a set of objects as well as expressing the meaning (which is its function as predicate), predication involves a relation between the class of objects pointed to by the subject and the class of objects potentially pointed to by the predicate.

Now since the predicate does implicitly point to a class of objects, predicates could be applied to it as if it were a subject; and sometimes the relation of predication--from the subject to the predicate, and from that predicate to a given potential predicate of it--is what they call "transitive."

A "transitive" relation is a relation that applies in a chain-wise fashion. Some relations are transitive and some aren't. For instance, the relation "is the ancestor of" is transitive, because if John is the ancestor of Frank, and Frank is the ancestor of James, then John is the ancestor of James. But the relation "is the father of" is not transitive, because if John is the father of Frank and Frank is the father of James, that does not make John the father of James, but his grandfather.

Basically, the rules for the categorical syllogism simply list the times when the relation of predication is transitive. If you violate one of the rules, you run into a case where the relation of predication is like using "is the father of" instead of "is the ancestor of." Not surprisingly, since you are looking at the predicate as if it were referring to a class (instead of in its actual meaning-function), this relation of predication will be very similar to the relation "is included within" and "is excluded from."

But if you think of the relation as one of predication and not class inclusion and exclusion, then the syllogism above reads (putting it into logical form now): "Every sailor is a human being and every human being is something mortal; and so every sailor is something mortal." Here it is obvious that "human being" is what mediates between sailors and mortal things; the extreme terms are where they belong: the subject first and the predicate last; and the middle term is in the middle.

Now of course you can think of class inclusion in this way also. In this case, the smallest class is inside the middle one and the middle one inside the largest. Aristotle started from the largest, that is all. Traditional logic therefore calls the premise containing the largest class the "major" premise, and the one containing the smallest class the "minor" premise--and if you think in this way, it makes sense to put the larger first. But I think a shift in terminology will put things closer to the way we speak and the way we follow speech.

The subject premise is the premise that contains what will be the subject of the conclusion, whether this term is the subject of its premise or not.

The predicate premise is the premise containing what will be the predicate of the conclusion whether it is the predicate of its premise or not.

The subject term is the term that is to be the subject of the conclusion.

The predicate term is the term that is to be the predicate of the conclusion.

The middle term is the term that does not appear in the conclusion.

So the subject term may be the predicate of the subject premise, or it may be its subject; and similarly with the predicate term. If you just say "subject" or "predicate" without adding "term" then you are talking about the premise the term is in. If you add "term" to this, you are talking about the fact that it is the subject or predicate of the conclusion, not of the premise it is in.

This is what I warned you about in the beginning of this section when I was defining "subject" and "predicate." Even though there may be some confusion here, I think it is clearer than calling the subject term the "minor term" and the predicate term the "major term." With my terminology, you can see what the function of the term is.

Now then, as I say, the rules of the categorical syllogism are simply the conditions under which predication is transitive. Traditionally there are nine rules; but two of them can be derived from the others. Some also do not give the first two of the traditional rules as rules, because they are contained in the very form of the syllogism itself; but I think they belong because they define the form. Some also combine my rules six and seven with an "if and only if" statement; but since this reads as two implications, I think they are better separated.

Rules for the categorical syllogism

1. There must be three and only three propositions.

2. There must be three and only three terms.

3. The middle term must be definite at least once.

4. If a term is definite in the conclusion, it must be definite in its premise.

5. Both premises may not be negative.

6. If one premise is negative, the conclusion must be negative.

7. If both premises are affirmative, the conclusion must be affirmative.
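
Since the rules are mechanical, they can be put into a checker. Here is a sketch in Python (the encoding is my own, and it leans on the traditional doctrine that a subject is definite when its proposition is universal and a predicate is definite when its proposition is negative; rule 1 is built into the fact that the checker takes exactly three propositions):

    # Each proposition is (subject, predicate, universal?, affirmative?).
    def definite(term, prop):
        subj, pred, universal, affirmative = prop
        return (term == subj and universal) or (term == pred and not affirmative)

    def check(p1, p2, concl):
        premises = (p1, p2)
        terms = {t for p in premises + (concl,) for t in p[:2]}
        if len(terms) != 3:
            return "breaks rule 2: not exactly three terms"
        middle = (terms - set(concl[:2])).pop()
        if not any(definite(middle, p) for p in premises):
            return "breaks rule 3: middle term never definite"
        for term in concl[:2]:
            premise = p1 if term in p1[:2] else p2
            if definite(term, concl) and not definite(term, premise):
                return "breaks rule 4: definite in conclusion, indefinite in premise"
        negatives = sum(1 for p in premises if not p[3])
        if negatives == 2:
            return "breaks rule 5: both premises negative"
        if negatives == 1 and concl[3]:
            return "breaks rule 6: negative premise, affirmative conclusion"
        if negatives == 0 and not concl[3]:
            return "breaks rule 7: affirmative premises, negative conclusion"
        return "passes rules 2-7"

    print(check(("sailor", "human being", True, True),
                ("human being", "mortal thing", True, True),
                ("sailor", "mortal thing", True, True)))    # passes rules 2-7

    print(check(("murderer", "rule violator", True, True),
                ("drug addict", "rule violator", True, True),
                ("murderer", "drug addict", True, True)))   # breaks rule 3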

Why these rules?

As far as the first rule is concerned, in a categorical sorites you don't have three propositions; but the point of the first two rules is that the number of terms used equals the number of propositions. But sticking with the syllogism itself, there is just one transitive relation, which will involve three propositions: two premises and a conclusion.

As to the second rule, it is most often violated by using the same word (or phrase or clause) as two different terms. No one would offer something like this as a syllogism: "Every sailor is a human being and every fish is mortal; and so every sailor is mortal." It is obvious that no mediation is going on here. But here, for instance, the fourth term is not obvious: "The conclusion of every valid argument is true, and everything that is true is factually the case; therefore the conclusion of every valid argument is factually the case." But as we saw, "true" the first time means "follows according to the rules from the premises," and the second time means, of course, "is factually the case." But if the premises are factually false, then the conclusion is still "true" in the sense that it follows; but it may be factually false. So in some cases, the two middle words point to different sets of objects; and so the conclusion doesn't follow. This is, of course, what is traditionally called a "four term syllogism."

As to the third rule, the easiest and most common way to violate it is to have the middle term the predicate of two affirmative propositions. If it is, it is indefinite both times; and it doesn't have "at least one" to go with it to show this, because predicates don't carry the tag of what part of the class they point to, since they're not really pointing.

But that makes this the "guilt by association" type of fallacy. For instance, "Every murderer is someone who violates society's rules, and every drug addict is someone who violates society's rules; and so every murderer is a drug addict."

If you think of this in terms of class inclusion, you know that the whole class of murderers is somewhere inside that of violators of society's rules; and the class of drug addicts is also somewhere inside that same larger class; but what this doesn't tell you is whether they're inside each other, partially overlap, or are in completely separate parts of the larger set. Demagogues use this rule quite a bit, and usually in this form, because its invalidity is disguised.

As to the fourth rule, the reason a term that is indefinite in its premise can't be definite in the conclusion is what we saw earlier in discussing conversion; your conclusion would be going beyond your evidence. From "at least one" you can't argue to "every" member of the same class. Note, however, that the middle term can be indefinite in the subject premise and definite in the predicate premise, and so appear to be "going from indefinite to definite." But here you are not concluding to anything; both of these are premises, and so don't depend on each other. That is, "Every sailor is a human being (indefinite), and every human being (definite) is a mortal thing" yields a legitimate inference to "Every sailor is a mortal thing."

As to the fifth rule, it is easiest to see the reason for it in terms of class inclusion. If two classes are excluded from a third, they could be anywhere in the universe, and don't have to have anything to do with each other, even though it's possible that one could be wholly or partially included in the other. From the point of view of predication, it basically says that no mediation is possible when the subject doesn't belong to the "middle" and the "middle" doesn't belong to its predicate.

As to the sixth rule, its necessity can be seen from what happens if you obvert the negative premise, as in "Every sailor is a human being and every human being is not a horse." If you obvert the second premise you get, "Every human being is a non-horse" without changing the meaning; and it is obvious that from this you can't conclude to anything about horses.

Just as the sixth rule essentially says you can't argue to a connection from a disconnection, so this seventh rule says that you can't argue to a disconnection by connecting.

Those are the rules, and a little bit of why they are the rules. Applications can be quite intricate, of course. In fact, I think it is worth mentioning even in this sketch that there are several possible "figures" (arrangements of subject and predicate) that the categorical syllogism can take, the clearest of which is the first, and the most unclear the fourth. Here, the dots between the letters for Subject term, Middle term, and Predicate term simply indicate some kind of copula, either affirmative or negative.



I        II       III      IV

S . M    S . M    M . S    M . S

M . P    P . M    M . P    P . M

S . P    S . P    S . P    S . P



Logicians have developed special rules for each figure (such as that in the second, one premise and the conclusion must be negative); but they are simply applications of the general rules, and so I personally don't see any reason why anyone would be forced (as I was) to learn them.

I said that the last figure was the most unclear. An example would be, "Every horse is an animal, and every maverick is a horse; therefore at least one animal is a maverick." I wrote this into a textbook and in my first version, I drew the conclusion, "At least one horse is a maverick," which uses the middle term "horse" three times. It is just a very confusing way of arranging terms, and is to be avoided. If you see it, convert one of the premises and go on from there, and you will be able to follow what is happening.

Of course, there is also the traditional arrangement of these "figures" with the second line first and the first line second. It is, as I say, somewhat less clear than the arrangement I gave; and the clarity deteriorates with the less clear figures.

Now then, I mentioned that the addition of a rule (and a corollary of it) to traditional logic would make the categorical syllogism fit things like the head of a horse.

Now the reason this won't work in traditional logic is that the mediating term becomes embedded as part of the middle term; and the traditional way of treating the term deals with the middle term as a whole. For instance, if you talk about the head of a horse, the term is "head of a horse," and you can't argue from the "horseness" of the horse in this form. But it's obvious that in the real world of reasoning you can. Hence the following rule:

Rule of substitution: If a term appears as part of a more complex term, then any predicate of the part can, in its indefinite form, be substituted for the term which is the part.

This handles affirmative propositions. To flesh out the enthymeme "A horse is an animal; therefore the head of a horse is the head of an animal," we begin with a tautology for the subject premise: "Every head of a horse is a head of a horse; and every horse is an animal; therefore (by substitution), every head of a horse is a head of at least one animal."

Similarly, "John loves Mary and Mary is a woman, therefore John loves a woman" becomes "John is something that loves Mary, and (every) Mary is a woman; therefore John is something that loves at least one woman." Note that here you have not made Mary the only woman in John's life--which wouldn't in fact follow from the fact that he loves her, unfortunately.

I might point out that in contemporary logic as it stands, this kind of inference can be made; but you need twelve steps to do it.

Dealing with negative propositions is more complex. To make the substitution, you first start with the tautology, then obvert the negative proposition, giving you an affirmative one you can use for substitution. Thus, to formalize "A horse is not a fish, and so the head of a horse is not the head of a fish," we start, "Every head of a horse is a head of a horse, and every horse is a non-fish." Substituting gives us "Every head of a horse is a head of a non-fish."

But even though this says the same thing as "Every head of a horse is not the head of a fish," you need another rule to make it come out that way:

Rule of substitutional obversion: The obverse of a term can be substituted for a term contained within a more complex term if the whole proposition is changed from affirmative to negative or vice versa.

So the complete inference goes like this: "Every head of a horse is a head of a horse, and every horse is a non-fish; therefore, every head of a horse is a head of at least one non-fish; therefore (by substitutional obversion), every head of a horse is not a head of at least one fish."

Or, "John doesn't love Mary and Mary is a woman, therefore John doesn't love a woman" becomes, "John is a non-lover of Mary, and Mary is a woman; therefore John is a non-lover of at least one woman; therefore, John is not a lover of at least one woman," which as you can see is clearer than the conclusion drawn the informal way.

I don't at the moment see how you can infer from the horse and fish that the head of a horse is not the head of any fish at all (which knowledge of horses and fishes tells me is true); but those more ingenious than I can probably modify these rules to get around the difficulty--or maybe, as in the case of John and Mary, you can't legitimately conclude from the fact that a horse is not a fish to anything general about their heads.

In any case, this is all I am going to say about logic and its relation to statements.



Section 3

Mathematics


Chapter 1

The different kinds of logic

Mathematics is a good deal like the logic of propositions in one sense, in that it doesn't deal directly with the real world; but there are important differences between the way mathematics does things and the way formal logic (the logic of statements) does--which is why, as I said, it is not a good idea to model formal logic on mathematics. And, like formal logic, I think modern mathematics has also got itself into difficulties, in this case in the area of infinite sets, and so I am going to make a few critical comments; because I think the difficulties are philosophical, not mathematical, and it doesn't follow that good mathematicians know all about how mathematical thinking works, any more than it follows that good drivers know how cars work.

One reason mathematics is thought to be the same as logic is that it isn't an empirical science. If it was once thought that 5 + 7 = 12 was something rooted in the nature of things, it is now realized (correctly, I think) that the roots are considerably more tenuous than we thought they were. There are number systems (such as the hexadecimal system used in computers) in which 9 + 9 = 12 (because 12 in this system means "one 'sixteen' and two units"). But that, of course, is a quibble about the digit at which you choose to have the Arabic form of numbering system repeat.
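
You can watch this happen in any programming language; in Python, for example, where hexadecimal numerals are written with a 0x prefix:

    print(0x12)        # 18: one "sixteen" and two units
    print(hex(9 + 9))  # 0x12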

However, there are different geometries from Euclid's today, which work quite well in the real world, thank you; and so Euclid's "axioms" are not axioms in the sense he thought of them: universal truths about how figures "really are."



Chapter 2

The foundation of mathematics

Then what is mathematics all about, basically? This is tied to the question of why it works, since in itself it is a closed system and neither needs verification nor application to justify itself. Nevertheless it does indeed apply to a wide range of things, and especially to what is measurable.

This is why many philosophers, from ancient times on, have thought that mathematics is the science of quantity. But, though I think its major application is in dealing with what is quantified, I don't think that this is really its essence.

As I see it, this is what mathematics is:

Mathematics is the science of relationships and the related as such.

Most of the relationships mathematics has so far concerned itself with have been quantitative ones; but contemporary mathematics has gone rather beyond this and begun exploring relationships that don't necessarily involve counting and measuring, such as the "belonging to" relationship of set theory. And as mathematics discovers more and more what it is doing, it is quite possible that there could be branches of mathematics that explore relations like causality, similarity, inherence, and so on; and who knows? Some of these might turn out to be fruitful and applicable to problems in the real world.

At any rate, the tie of mathematics to the real world is that in the real world, and certainly in the real world as known by us, there are relationships. It isn't surprising that an exploration of what a given one of these relationships is and what is implied in having things related in this way would have an application to things that are related in the way in question.

One of the reasons mathematics is difficult for ordinary people to follow is that we start from objects and abstract the relationships from them, as we saw in Section 3 of the third part. Mathematics supposes the relationship to have already been discovered, and doesn't care what it came from. We discover, for instance, that objects belong to classes of, say, similar objects. Mathematics says, "Let's look at what 'belonging to' means and implies."

Hence, 1. Mathematics starts with the relationship itself.

2. Mathematics then defines (i.e. makes up imaginary) "objects," whose sole meaning is "to be the object of this particular relationship."

3. Mathematics then asserts a set of basic facts about these invented "objects" based on the meaning of the relationship they have with each other. These are the "axioms" of the system.

4. Mathematics then draws out the logical implications of these facts about the objects related in this way. These are the "theorems" of the system.

One of the reasons formal logic doesn't work as a mathematical system is that statements have meaning as well as truth, and the two can't be divorced from one another. Hence, if you want to talk about "truth-functions" and create objects called "propositions" which are supposed to have nothing but truth or falsity and connectives which are supposed to be nothing but truth-functional, you are falsifying the relationships of statements with each other and with each other's truth; and so your logic will not work.

On the other hand, you can separate out "belonging to" or "in addition to" or "beside" from other relations the objects have; and so there is no falsification going on if you explore just the relationship, say, of "belonging to" by making up objects called "sets" which you then define as "what is belonged to" and "members" whose definition is "what belongs to."

And right here is where most people who aren't of a mathematical turn of mind have one of their major difficulties. "Yes, but what is a set?" they ask. "Give me an example of one." If a mathematician is being true to his calling, he precisely can't give an example of a set, because there's nothing in the world that does absolutely nothing but get belonged to by members--or better, does absolutely nothing but get involved in belonging-to relations, since sets can belong to other sets. But they can't do anything else but belong to or be belonged to; because they were invented precisely to do nothing else, so that the relationship "belonging to" could be explored without any distraction.

And that's the idea of these "objects." Since they have no other raison d'être than to be "whatever is related by the relation in question," and therefore they have only the one aspect which is the foundation of this relationship and no other, then the dream of the empirical scientist is fulfilled: to have isolated the objects of his investigation from all distracting acts and characteristics.

So, exactly backwards from the way we normally understand, with the objects and their messy multiple acts first, and the relations understood from them, mathematics takes the relationship first and derives the objects from it. In so doing, of course, it sacrifices having its objects tied to the real world; only the relationship itself has any tie to the real world. But of course, since there are real objects related in this way, then the mathematical objects will be abstractions of them.

But it is important to see that the mathematician does not get his objects by abstracting from real objects that have the relationship he is interested in and finding what is "common" among all of them. This would make his science empirically verifiable or falsifiable, and it isn't. No, the objects are simply made up because relations need relata, and these relata are created to be nothing but the relata of this relation. Thus, when a mathematician "defines" a set as "a collection of objects," he is just helping you out and kowtowing to your way of thinking, so you won't put him away in a padded cell. He knows that this isn't what a set is, because "collections" have all sorts of properties in addition to "being belonged to," and "objects" have more to them than just belonging to sets. It is only when you get fairly deeply into mathematics that he lets you in on the secret--and probably not even as explicitly as I have done. Certainly none of my mathematics teachers ever did, and I have studied some mathematics at the postgraduate level. I've had plenty of hints, but no one came right out and said it in so many words.

Now of course, the mathematical system in question will only be applicable to real objects insofar as the mathematician's objects have the single property which is one property of real objects related in the same way; but if they don't, his system isn't false, but just inapplicable to the real world. Actually, the inapplicability wouldn't be the fault of the objects, exactly, but that the relation in the real world is different from what the mathematician meant when he used the same word. Thus, for instance, set theory does not fit the relation of "belonging to" in the sense of ownership; neither a member nor a set owns anything.

But mathematicians like to justify their existence as much as anyone else does, and so they try to make the relations they are dealing with as nearly as possible the same as the basic meaning (or at least one common meaning) of the word they use to express it. To the extent that they succeed, to that extent the mathematical objects will be abstractions of the real objects related in this way, and the math will apply.

Now the relation is defined in mathematics by the axioms. The "definitions" deal with what the objects are in terms of what the relation is; the relation itself is defined as a series of facts of what these objects do to each other (i.e. how they connect with each other). Thus, for instance, one of the axioms of set theory is that a member belongs to a set, but a set may not belong to a member. Another is that a set may be included in another set, in which case it is called a "subset." Another is that if a set is a subset of another set, all of its members are also members of the other set, and so on.
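
Python's built-in sets make a rough (and only rough) stand-in for illustrating the last of these axioms; a set that is to be a member of another must be "frozen," but the idea comes through:

    A = frozenset({1, 2})
    B = {1, 2, 3}

    print(A <= B)                   # True: A is a subset of B...
    print(all(m in B for m in A))   # ...so all its members are members of B

    C = {A, 4}                      # a set may itself belong to another set
    print(A in C)                   # True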

The idea here is to get as few statements as possible that define the relationship exactly and give all and only the independent possibilities and the impossibilities for the objects of that particular relationship. These are the axioms.

It is easy enough to make up a set of axioms for some relationship you just create out of whole cloth and give a name to, like the relationship of "jonesing." You define objects such as smiths and knopfs and then make up axioms like, "Every smith can jones a knopf, but two and only two knopfs can jones a smith." "If a smith joneses a smith, then it cannot simultaneously jones a knopf." "A knopf can jones one knopf, but the result is two knopfs." And so on.

What are you talking about? That's it. Precisely nothing, because in the system there is no meaning to "jonesing" except what the axioms say--and of course no meaning to "smiths" and "knopfs" except that they "jones" each other according to the axioms. In this case "to jones" has no meaning outside the system either, so it's all a game; it's when the relationship means something outside the system that mathematics makes sense to non-mathematicians--and mathematicians can hope to get paid for doing their thing.

The tricky part of the axioms comes when you're dealing with a relationship that means something in the real world. Then you have to see to it that (a) your axioms exhaust all of the independent aspects of this relationship, (b) that you don't introduce something as an axiom that only sometimes is true of the relationship in the real world; and (c) that you don't bring in something that seems to be part of the relationship in the real world but is actually a different property of the objects that happen to be related in this way.

Euclid, who was a towering genius, introduced into his geometry the famous "parallel postulate" (axioms and postulates nowadays mean the same thing), that one and only one line parallel to (i.e. never meeting) another line can be drawn through a point outside it. It turns out that this is true only of lines on what we normally think of as a flat surface (plane geometry); but on other types of surfaces, it doesn't apply at all. Hence, it isn't an axiom of geometry as such, but only of one specific type of geometry, not surprisingly called "Euclidean geometry." The point here is that if Euclid couldn't spot what was irrelevant to what he was doing, we lesser lights are going to have a much worse time picking out all the axioms and ensuring that we have only the axioms for any applicable branch of mathematics.

What happens next is that these different possibilities are combined in various ways to generate new statements about the objects based on the axioms; and these are the theorems.

Beyond that, there is the use of the particular mathematical system by those who want to apply it to the objects that are related in the way in question. These people are not interested in proving theorems from the axioms, but in making statements and drawing out implications from them.



Chapter 3

Some mathematical problems

Perhaps because of this, mathematicians are interested in what they call "closure" and "completeness." As I mentioned in the preceding section, a system is closed when any legitimate operation on an object in the system will keep you still inside the system; and it is complete when any statement in the system follows somehow from the axioms. (In case you are wondering what a "mathematical statement" is, it is the affirmation of some relation among the objects. For example, 2 + 2 = 4 is a mathematical statement. You can see that it can be called true, because it is consistent with the axioms of number theory.)

But since mathematics makes statements and uses logic to draw its conclusions, it would not be surprising to find that it is possible to construct indirectly self-contradictory statements in a particular branch of mathematics. For instance, in set theory, you can talk about "the set of all sets that are not proper subsets of themselves." (A proper subset is, basically, a subset that doesn't contain all the members of the set it is in; if it has all of them, it is improper.) If the set above is not a proper subset of itself, then of course it is one of its members; but since there are the other member sets, this would make it a proper subset of itself, which would exclude it as a member. Something that contradicts itself even by implication obviously has to be ruled out as an object. Ruling these out would not make the system incomplete, any more than it does to rule out an "object" that violates the definition of the object--as when someone tries to argue mathematically from the Trinity, which is one and also three. This cannot be a mathematical object in number theory, because the "one" of number theory excludes "three."

But the search for completeness leads to one of the interesting conundrums of mathematics. Some years ago, a man named Kurt Gödel showed that, in any mathematical system that was complex enough (which included, of course, all the major areas of mathematics), a mathematical statement existed which was the equivalent of "This statement does not follow from the axioms." And, of course, any system with that statement in it is by definition not complete.

This is not one of those indirectly self-contradictory statements, because there is no necessary connection between a statement's being meaningful and its following from the axioms. That is, there is no intrinsic necessity for saying that every statement that is consistent with the axioms has to be implied by them.

I think the reason is that implication is a logical relationship; and the statements are related to the axioms by logic, not by the relationship that forms the basis of the axioms themselves. And there's no law of logic I know of that says that logic as actually used has to be a closed system.(1)

And when you think about it, to say that something depends on something else (as an effect depends on a cause, or--in this case--a conclusion depends on its premises) doesn't imply that relations between dependents also have to depend on something. So the fact that theorems in mathematics are meaningful statements that depend on the axioms doesn't imply that all meaningful statements that can be made are theorems.

Nevertheless, mathematics wants to make its system as complete as possible, so that the axioms will indeed imply "practically all" statements made in the system. And, of course, it wants its system closed, so that whatever is done in accordance with the axioms will remain a meaningful statement.

In discussing this in the preceding section, I mentioned how the various kinds of numbers were created to preserve closure. Any kind of mathematics with pretensions to applicability also wants to preserve the two kinds of statement equivalent to affirmation and denial within its system; and within the system the equivalent of a denial is called an "inverse" of the statement or operation in question. Thus, in the number system, subtraction is the inverse of addition (and vice versa, of course), division the inverse of multiplication, taking the root the inverse of raising to a power, differentiation the inverse of integration, and so on.

And what was discovered is that it's one thing to have a closed system on one operation; but to have it closed on the operation and its inverse is something else again. So the integers were invented to close subtraction; but this created the number zero, which was neither positive nor negative, but was needed to take care of performing the inverse operation on the same number (as 3 - 3 or -3 + 3). In order to include this number in multiplication, the rule was made that any number multiplied by zero gave a result of zero (because multiplication by 1 gave you the number itself). Everything was fine with respect to multiplication now, but the inverse meant that zero divided by any number would have to be zero (because its inverse would be zero times that number--take the result of the division, zero, and multiply it by the number, and you have to get the original number, zero). But then, what about dividing by zero? In all cases but one, it can't have an inverse. That is, take 7 divided by zero. What would it be? Not 7, because the inverse (7 times zero) is zero, not the 7 you started with. Not zero, because zero times zero is zero, not 7. And certainly not any other number.

So mathematics had to throw up its hands and say, "Division by zero is forbidden." The attempt to close the system and keep the inverses had resulted in an operation that gave meaningless results.

Nevertheless, there is one division by zero that is not meaningless, because its inverse gives a result: Zero divided by zero. The trouble with this is that you can assign any number you want as its quotient (result, for those of you who have forgotten your division), and the inverse will work. For instance, 0/0 = 322. Well, 322 x 0 = 0. So it works. Hence, the operation in this case is meaningful but indeterminate.
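
A couple of lines of Python make the contrast vivid (the "inverse test" is just the multiplication check described above):

    # A quotient q passes the inverse test when q times the divisor
    # gives back the dividend.
    inverse_test = lambda dividend, divisor, q: q * divisor == dividend

    print([q for q in range(5) if inverse_test(7, 0, q)])  # []: meaningless
    print([q for q in range(5) if inverse_test(0, 0, q)])  # [0, 1, 2, 3, 4]: indeterminate
    print(inverse_test(0, 0, 322))                         # True, as in the text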

Why am I bothering with this? Because it turns out that there is an application for it which was discovered by Isaac Newton and Gottfried Leibniz more or less at the same time but independently of each other--and each of them developed the system by abstraction from its applications (Newton from investigating motion, Leibniz to show that his theory of monads worked), and mathematicians ever since have been racking their brains to show why it is mathematically legitimate.

I am talking about the differential and integral calculus, of course. Let me give you the standard justification for it, which mathematics has more or less settled on, and which is riddled with inconsistencies: the notion of the "limit."

The idea of the limit is that if a given result of a mathematical operation gets closer to a certain number (or stays at that number) as the objects operated on get smaller and smaller,(2)

then it makes sense to say that you know what the result would be if they actually got to zero. Their actually being at zero is ruled out for one reason or another by the laws of mathematics (such as its being illegitimate to divide by zero); but if it did make sense, we know what the answer would be. That answer is called the "limit."

Now mathematicians talk about getting "really close" to the limit by being in the "epsilon neighborhood" of it. By this they mean "Take a really tiny number--and I mean really really tiny, and call it 'epsilon,' and I'll show you a 'delta' which is even smaller." It's supposed to be a number so small that its distance from the impossible value is so slight as to make no difference; and if the result is all right at this range, then "that's good enough for practical purposes."

Mathematically, of course, that's nonsense. No matter how close your point is to your target point on a line, you still have just as many points as are in a line a hundred miles long between you and the target. I could prove this, but ask your neighborhood mathematician to do it for you. Any line has an infinity of points in it, just exactly as many as the points in any other line.

And the limit is an exact number, not a very close approximation. Let me refer back to a case where the limit is approached as the number becomes larger: the supposed mathematical "solution" to Zeno's paradox about crossing the room that I talked about in Chapter 5 of Section 3 of the second part 2.3.5. There, you will recall, the argument was that to cross the room, you first had to go half way, then half of the rest, then half of the rest, and so on; and you can never get there, because you still have half of the remaining distance to go no matter what point you've reached.

I solved that paradox there by saying that the motion across the room was one act, not a series of starts and stops; but what I am interested in here is why the concept of the limit doesn't solve it, even though some mathematicians who don't understand what the limit means think it does.

Now the distance to the other side of the room is the whole distance (corresponding to the number 1), and this is broken up into the series (1/2 + 1/4 + 1/8 + 1/16 + ... + 1/2^n + ...). If you look at the sums at each stage, you see that they are 1/2, 3/4, 7/8, 15/16, ..., (2^n - 1)/2^n, ...; so that the larger n becomes, the closer the fraction is to 1. The limit, therefore, of this series "as n becomes infinite" is 1.

"Therefore," say the mathematicians, "you can get there." No you can't, I answer. The limit is the definite place you can't get to and can't get beyond; though you can get as close to it as you like.

That is, you could get to the limit if this number meant anything: ∞. But ∞ (called "infinity") is just "the last number," and the number system is defined in such a way that there is no last number. It is not speaking properly to say that the numbers in the series "approach infinity," as if it were a number to be approached, but that they "become infinite," meaning that they just keep getting larger and larger without stopping. So that "number" is just a sign of a process, not a number at all. I'm speaking within mathematics here, not commenting on it; any mathematician would agree with what I am saying. Zero is a number but "infinity" isn't.

But since you get closer and closer to 1 as the numbers in the fraction "become infinite," then 1 is the place you would get if it were ever possible to get there (which it isn't). So you still can't get across the room. The only thing the limit says is that the other side (and not, say, the ceiling) is the place you can't get to. So Zeno's paradox is only defined, not solved, by the notion of the limit.
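
If you want to watch the partial sums do their creeping, here is a short sketch in Python, using exact fractions:

    from fractions import Fraction

    total = Fraction(0)
    for n in range(1, 11):
        total += Fraction(1, 2 ** n)   # add 1/2, then 1/4, then 1/8, ...
        print(n, total)                # 1/2, 3/4, 7/8, ..., 1023/1024
    print(total < 1)                   # True: no finite stage ever arrives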

Similarly, if you are traveling at 32 miles in one hour, you're traveling 32 miles an hour; if you keep going a half hour longer and you go 16 miles farther, you're still going 32 miles an hour; if you go for a quarter hour and do 8 miles, you're still at the same speed--and so on. If, as the time of your travel gets shorter and shorter, the ratio between the distance and the time (the speed) remains the same, even when the time gets down into nanoseconds, then we can safely assume that you're keeping a steady pace. So what speed are you traveling at a given instant?

Well, if you consider a speed a distance divided by a time (it isn't actually, as I said in Chapter 6 of Section 3 of the second part 2.3.6, but we're doing mathematics here, not physics), then you've got a distance which is "infinitesimally small" divided by zero. No you haven't. If your distance is anything but zero, then the time is not zero (an instant) but one thirty-second of the distance (which would be a finite number). Hence, your "infinitesimally small" distance has to be a zero which in this case is the zero which is thirty-two times as great as the zero in the denominator.

What are you saying? Zero x 32 = zero, of course. But that means that the zero on the right-hand side is a zero which is thirty-two times as great as the zero on the left-hand side.

But that's nonsense, isn't it? No. Divide the zero on the right by 32 (that's legit; it's the other way that's forbidden); you get zero (the particular zero that is one thirty-second of the numerator).

Remember that I said that zero divided by zero is meaningful but indeterminate? Well in special cases like this, where 0/0 is the limit of some "continuous function" (something that boils down to a series more or less like the one I described), then the zeros are defined in relation to each other, and the result is a definite number based on the ratio. Obviously, if you're traveling at a steady 32 miles an hour, you're traveling that speed at any instant of your journey--as you can check by looking at your speedometer, which measures instantaneous velocity, as I said in Chapter 5 of Section 3 of the second part 2.3.5, not some ratio of distance to time.

That's why the calculus works, not because of some "epsilon neighborhood" you get into. 0/0 is an exact number in these cases, not a "very close approximation to something that is meaningless," because in these cases--and only in these--it has meaning.
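
The same point can be put as a numeric sketch in Python (the steady 32 miles an hour is from the example above; the shrinking intervals are my own choice): the distance and the time each head toward zero, but their ratio never budges from 32.

    # As the time interval shrinks toward the "instant," the distance and
    # the time both shrink toward zero, but the ratio distance/time is
    # always exactly 32.
    speed = 32.0
    t = 1.0
    for _ in range(10):
        d = speed * t                       # distance covered in time t
        print(f"t = {t:.9f} h  d = {d:.9f} mi  d/t = {d / t}")
        t /= 10                             # shrink the interval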

So given that zero divided by zero is defined in the cases spoken of in the calculus, that means that there is a whole field of numbers you get into in this process, and that you can get out of by integration. That is, these numbers would be something like the negative numbers you get into by subtracting a larger from a smaller number, or the square roots you get into by taking the root of something that's not a perfect square, or the imaginary numbers you get into by taking the square root of a negative number.

Since I have discovered this field of numbers as a field, even though it's been in use already, and since I have shown how you get into it and out of it, I now claim the right to name it:

The philosophical numbers are the numbers entered into by dividing zero by zero when that is defined, or, in general, by following the rules of the differential calculus.



The beautiful numbers are the number system that includes the real numbers, the imaginary numbers, and the philosophical numbers. That is, all the numbers known up to the present.

I will leave it to the mathematicians to work on the number system in the light of this approach. I do think it should make the calculus less of an anomaly than it is at present.

So that's one paradox in mathematics that I think I have been able to do something to solve.

I think, however, that there is another paradox that is due to an implicit taking of a word in two senses, leading to strange results. I am speaking of the theory of infinite sets (and by implication all that follows from it).

An infinite set is one that is cardinally equivalent--in ordinary language "equal," though it's technically defined, of course (see below)--to a proper subset of itself. We saw "proper" and "improper" subsets above, and to refresh your memory, {1, 2, 3} is a proper subset of {1, 2, 3, 4, 5}, while {5, 4, 3, 2, 1}, for instance, would be an improper subset of it (the arrangement of the members doesn't matter). A set is "cardinally equivalent" to another if you can match up each member of one with one and only one member of the other. Thus, {a, b, c, d, e} is cardinally equivalent to {1, 2, 3, 4, 5}. This is what is meant by "equal" or "having the same number of members as" in set theory.

Now then, if you take the set of the natural numbers, {1, 2, 3, ... n, ...}, you can match this up with the even numbers, {2, 4, 6, ... 2n ...} by the rule implied in the "2n." Since every number has a double, then for any member of the natural numbers, there is one and only one even number that corresponds. Hence, the set of the natural numbers is equal to (cardinally equivalent to) the even numbers. But of course, the set of the natural numbers contains both the even numbers and the odd numbers; and so there are members in it that are not in the even numbers--even though "cardinal equivalence" obviously means that there are the same number of members in both.
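
The matching rule is simple enough to exhibit in a few lines of Python--though, notice, a machine can only ever display the pairing for a finite stretch of the two sets, which bears on what follows:

    # The rule n -> 2n pairs each natural number with one and only one
    # even number; here is the pairing for the first ten naturals.
    for n in range(1, 11):
        print(f"{n} <-> {2 * n}")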

Instead of saying, "Wait a minute! We have to have contradicted ourselves somehow!" mathematicians have said, "Well, this is just one of the odd things about infinite sets: that they have the same number of members as part of themselves."(3)

All sorts of bizarre conclusions can be drawn, once you accept that everything is all right with this theory. For instance, the double of the set of the natural numbers is equal to the set; the square of the set of the natural numbers is equal to the set. Adding one to the set makes the set equal to what it was (because you can match the additional 1 to the 1 of the original, and every number from there on to n + 1 in the original).(4)

All very fascinating; but I think that there's a hidden contradiction in the core of set theory; and I don't think that you can really talk about "the set of the natural numbers" as a set. Why? Because you are talking about the set of all the natural numbers, and the natural numbers are so defined that "all," in the sense you'd have to be using it, has no meaning.

There are, as I said in the preceding section, two senses of "all." The first is the collective sense, in which you would say, "All the members of the class weighed exactly one ton." Here, you're taking "all" in the sense of "all, taken together as a unit." The second is the distributive sense, in which you could say, "All the members of the class are human beings," which is the equivalent of "Every member of the class is a human being." Here you are talking about the members individually, but none of them lacks the property you are attributing to them.

Connected with "every" is "any," which means, "pick out a member at random, and it will have the property I am speaking of." This is obviously an implication of "every"; if every member of the class is a human being, then any member of the class is a human being.

Now then, in talking about the set of the natural numbers, for instance, it has to be defined accurately. And it is defined accurately by {1, 2, 3, ... n ...}. The dots say, "proceed in this fashion" (in this case, by adding 1 to the preceding number); the "n" says, "do this for any number"; and the dots after it say, "keep going." So now, you can tell whether any object in the universe (even the mathematical universe of numbers) belongs to the set or not. For instance, 2/3 does not belong to the set, because it can't be got by adding 1 to a whole number. On the other hand, 753,826,714 obviously belongs to it.
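
The rule is mechanical enough to be written down as a test, as in this Python sketch (the function name is my own):

    # "Belongs to the natural numbers" = is a whole number you can reach
    # from 1 by repeatedly adding 1, i.e. a positive integer.
    def is_natural(x):
        return isinstance(x, int) and x >= 1

    print(is_natural(753_826_714))  # True: it belongs to the set
    print(is_natural(2 / 3))        # False: not reachable by adding 1s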

Now in defining the set this way, have you defined all the members, or even every member? You have if "all" means "I have a rule by which I can tell whether any object I meet belongs to the set or not; and I have another rule which tells me how to get any member of the set I want, and another rule which tells me to keep finding members."

But I submit that "all" means more than this, and you can see what I am driving at by considering the statement, "All the members of the class weighed exactly one ton." The point is that the numbers are so defined that "all" in this collective sense of "all taken together as a unit" has no meaning. They can't be taken together, because every number has a number (in fact an infinity of numbers) beyond it, because it is a property of any number that 1 can be added to it. Hence, you could never get through the numbers, and so the "keep going" rule can never be fulfilled.

Now I don't mean "never" in the sense of "not in any finite time" here, meaning merely that if you kept going until the heat-death of the universe, you wouldn't have finished. What I mean is that each time you add 1 to a number, you are just exactly as far away from completion as you were before you did it. Thus, "finishing" is not something that simply cannot ever in practice be accomplished, or even approached (really, you're always just as far away from "it" as you ever were); it is something that is self-contradictory.

This is similar to the notion of the limit, which I spoke of earlier, and which might make what I am trying to say clearer. In the case of the series which approaches 1 as a limit (1/2 + 1/4 + ... + 1/2^n + ...), I mentioned that this corresponds to the set of sums {1/2, 3/4, 7/8, ..., (2^n - 1)/2^n, ...}. Now if you say that "if you add up all the members of the series, you'll get 1," what you are now saying is that the last member of the set of sums is 1. But clearly this is impossible, because there is no "n" such that "(2^n - 1)/2^n = 1" is true. Hence, the limit precisely cannot be attained, because it is meaningless to talk about all the members of the series.

But that same sense of "all" is what you mean by talking about "all" the members of a set. You have a rule which defines "any" member and another one which tells you to keep going; but as above, that rule does not define "all" in the collective sense. In order to have that you need an additional statement or rule that will tell you "and there are no more." Not no "others," because that means "of a different type," and would be excluded by defining "any"; but no additional ones of this type. In other words, in order to define a set, in which the members are to be taken collectively together as a unit, you have to have a rule telling you when to stop including members in it.

So I think that there is something in the relation of "belonging to" that makes infinite sets out of the question; and I think my little demonstration about the sum of an infinite series corresponding to an impossible member of the set of sums shows that the difficulty is real, and that it is connected with the notion of "all" as used in set theory(5).

Where does that leave us? It seems to me that if what I said is true, you can't really talk about "the set of the natural numbers," any more than you can talk about ∞ as a number; though you can talk about "the natural numbers" in a kind of rough-and-ready loose sense, just as you can talk about "infinity" in a loose sense and use the symbol ∞ to refer to "it," realizing that in both cases you are talking about a continuous operation rather than the result of one. That is, since we know that "the natural numbers" are 1 and any number that follows by addition of 1, we can talk about "1 and 'all' the integers greater than it" as long as it is recognized that in the strict sense this is meaningless.

This concludes all that I have to say about mathematics. The subject is obviously very complex, but I leave it to the mathematicians. All I was interested in showing here is what kind of thinking and reasoning process goes on in mathematics--and based on that, how some of the apparent contradictions in the system can be solved.


Notes

1. In point of fact, that was what contemporary logic was trying to create; but, as I tried to show in the preceding chapter, contemporary symbolic logic cannot be applied to actual statements without claiming that some manifestly false statements are true.

2. Or larger and larger, but we are interested in increasing smallness of the objects here.

3. This notion of "infinite" in contemporary mathematics, I hasten to say, has no connection with the sense of "infinite" that I have been using in talking about God. Since quantity is a limit, then quantitative terms such as numbers simply do not apply to God; and if trying to apply them (as, for example, talking about the Trinity) gets you into self-contradictions that sound like the paradoxes of non-finite mathematics, that is coincidental.

4. Interestingly, in case you're curious, the set of the real numbers (the integers, the fractions, and the irrational numbers) is not equal to the set of the natural numbers, because there's no way you can match up the irrationals with natural numbers; there will always be some left over. There is a proof for this (Cantor's "diagonal" proof, if you want to look it up), which shows that the attempt to do the matching involves a contradiction. So there are in fact smaller and bigger "infinities" in infinite set theory.

5. Note that "all" as used here is a philosophical term, not a mathematical one. That is, you can define "all" in a given mathematical scheme; but then you have to use it consistently with that definition. If you define "all" to mean what is meant in ordinary language by "any," you can't use it in the collective sense of "all taken together as a unit." And I submit that this is what mathematicians are in practice doing, whatever they say they are doing. So no, Humpty Dumpty, when you define a word, it may mean just what you want it to mean, but then you can't use it as if it meant something different.



Section 4

Science


Chapter 1

Logic and the real world

Since the objects of mathematics are nothing but mental constructs, then really anything goes in talking about them as long as you're consistent; and so there isn't a great deal for us to say on the topic. When you get to the empirical sciences, however, where you're trying to find out facts about the world as it actually is, the problems in how you go about most efficiently achieving your goal become quite extensive.

The main point of what I have to say, however, about science will be found in the theory of effect and cause that I developed in Section 2 of the first part of this book. I think it forms the basic core of what scientists are doing. I put it there, of course, because I happen to think that philosophy's goal is the same as that of the empirical sciences--to find out what is really going on--and so it shouldn't be surprising if its method is basically the same one.

What I am going to do in this section is go through the traditional Five Steps of the scientific method and show how my theory of effect and cause makes sense out of what scientists do. I am not really going to try to refute the myriad other views there are on the topic, except on the basis of an established canon of scientific theory: if my theory explains all that they can explain and does it more simply and more logically, then my theory is to be preferred to theirs.

While I am at it, I will also find occasion to talk about a couple of topics that I have not been able to fit in as yet, such as the laws of probability (and the function of statistics) and the logic of induction. Both of these are heavily used in science, and so discussing them is appropriate here; but neither of them really belongs anywhere else except back in Section 2 of the first part, and there they would have been incidental and would only have cluttered up the basic theory of cause. The logic of induction, by the way, does not belong in the section on formal logic, because it doesn't proceed, as we will see, from the nature of statements.



Chapter 2

Observation and hypothesis

Science, of course, assumes (or should assume) that the epistemological problem I talked about in the first five sections of the first part has been solved somehow, and that we can know about the real world, and that our knowledge is objective, however dependent it might be on observations. So science starts from facts about the real world. There have been scientists who have subscribed to phenomenalism, because of difficulties they encountered in their investigations (particularly in quantum mechanics); but as I tried to show in the first part, the solution was to reexamine some of the assumptions about our naive notion of "position," for instance, not to accept it and declare that "nothing is real" or that what you are observing is the observing.

So I am going to take it that what scientists do starts from observing facts about the real world, no matter what scientists say they are doing based on some philosophy of science.

But not every observation, not even every careful observation, not even every careful observation involving meticulous measurement, is a scientific observation. It isn't scientific if it doesn't lead to a hypothesis, experiment, theory, and some kind of verification. So if I were to go into my back yard and meticulously weigh and measure each stone in it, and then carefully put it back where I found it and note its location to the tenth of a centimeter on a detailed map of the yard, and then give the pages and pages of data to a geologist, the best I could hope for is that he would look at me and say, "What did you go to all that trouble for?"

The reason for this, as I indicated in Chapter 1 of Section 2 of the first part 1.2.1, is that what is prior to the first step in scientific method is curiosity, which means thinking that there is an effect (a case of facts contradicting each other) in the world "out there"; and the observation itself is an attempt by the scientist to assure himself that there really is a pair of facts that contradict each other and that he hasn't been misreading the evidence, and to be precise about what it is.

So the careful observation which simply establishes the fact that there are a number of stones of different sizes and weights in different places in my back yard excites no curiosity in the scientist or anyone else, because that is what one expects to find there; there is no effect to find the cause of. You have nowhere to go once you have listed all these facts.

So immediately, all the palaver that has been around ever since Comte about philosophy's trying to get at (the impossible) "why" of things and science's simply getting at the "how" and giving laws and not "explanations," is just that: palaver--based, interestingly, on Comte's attempted explanation of why religion and philosophy were supposed to have failed as methods of thought, and why "positivism" necessarily would succeed. It's interesting how much influence those, like Comte and Hume, whose theories disprove themselves, have had in subsequent ages.

The point of starting with an effect is, of course, as I noted in Chapter 2 of Section 2 of the first part 1.2.2, that you know a priori that there can't really be a contradiction in the real world, and so the effect you have discovered means that you don't have all the facts. There will be another fact--the cause--which, when added to the effect you have discovered shows that the effect was not really a contradiction after all. We have seen enough examples of this in the course of this book for me not to have to give any here.

One of the reasons some people like Thomas Kuhn have noticed that new theories come about from prior "paradigms" is that an effect is generally something that happens contrary to expectations. This doesn't preclude that you might come across two facts that contradict each other without your expecting anything in particular; but it would obviously be much more common to find something happening contrary to what your previous experience and reason tells you would be happening in this situation. You then have to search for a new paradigm, as Kuhn says, to fit the past experience and this one together. In other words, you have to find a cause that will make the past and this new event make sense.

Notice that, when we are dealing with small discrepancies, we simply say that the world is messier than our neat little theories, and we look for a cause in something wrong with the object. If you see leaves turning yellow and dropping off the trees in July, you don't worry about your theory of the seasons and their effect on deciduous plants; you say that some insect or disease is attacking the trees. Things like this only become significant when you realize that the event, however insignificant in itself, makes the theory you have developed about it impossible. We saw the logic of this in the preceding section. The event is a false consequent of an implication, which refutes the antecedent (your expectations). Then you have to rethink the whole thing.

Now then, scientific observation has two functions, as I indicated in Chapter 3 of Section 2 of the first part 1.2.3: (a) to gather all of the information you can on both sides of the contradiction, so that the effect whose cause you want to find is as complete as you can make it, and (b) to separate as far as you can the effect from what is affected, which has properties that have nothing to do with the effect. It is this second thing that mathematics does by default, as I mentioned, when it makes the mental constructs it calls "objects" and gives them only the properties it needs for the relationship it wants to examine. But in the real world, of course, that's not possible, so you might have to develop some complicated apparatus that can produce artificial environments for your objects to act in, so that they won't be affected in ways that aren't the ones you want to observe. For instance, if you want to test the speed of falling bodies, you put them in a bell jar and suck out all the air, so the speed won't be affected by air resistance.

Of course, if your effect might by any chance have to do with the amount of whatever property is involved, then you're going to have to measure your affected objects carefully, at the risk of losing something that could be crucial to the effect as such. For instance, if you didn't measure the rate of fall of falling bodies, you could come up with some theory like "bodies are attracted to each other," but not the kind of thing that Newton and Einstein developed based on Galileo's observations that falling bodies seemed (a) all to fall with regularly increasing speed, and (b) to accelerate at the same rate, whatever their weight. This led to Newton's sophisticated Theory of Universal Gravitation.

Measurement is very often very useful in science, even necessary in some sciences like physics. But it should not become a fetish, with people thinking that something can't be scientific unless it involves measurement. These people tend to fall into the opposite fallacy also: that if something is measured carefully, it is scientific. A former dean of the college where I teach, who was a physicist, took the students' evaluations of the teachers (which were on a ten-point scale at the time), averaged up each student's answers to get a general number for that student, then averaged that for the whole class, and then compared that average with the "average average" of all the faculty. If you got a 7.5 (which meant that the "average student" thought you were in the top quarter of the faculty), but the "average" professor got a 7.8, then you were "below average," because the students in your class didn't think you were as far "above average" as the "average" professor was "above average." When I remonstrated with him that it didn't make sense to say that a person was below average because he was above average, he answered, "Well, that's what the numbers say." I never could convince him that the numbers as he was using them were completely meaningless.

And this points up another handy aspect of using mathematics in science, which is also a serious danger. Since mathematical operations have inverses, then if you are describing your data mathematically, you ought to be able to go either from effect to cause or from cause to effect simply by choosing the right operation. For instance, if you start with a derivative, you can set it up as a differential equation and integrate it; and so you don't have to worry about trying to figure out ingenious explanations for the effect, it would seem; the mathematics just does it for you.

And of course, since mathematics is a system with strict and defined rules, then when you are applying mathematics, you don't have to think at all; once you get the equation into the proper form, you simply do the operation and out comes the answer. Machines can do this sort of thing just as well as human minds, because it is just a question of mechanically applying the rules--which is why we can hit Mars and Jupiter with our space probes: the computers spit out, with no trouble, equations fifty pages long, which it would take human beings millennia to work through.

No wonder, then, that scientists like mathematics. But it isn't because it gives you that much more insight into the "true nature of things," as Galileo thought; it is just that it's easy to use (it is, really, once you get the hang of it), it looks exact (even when it isn't(1)), you can perform complicated operations and get an answer even when you don't have the foggiest idea of what the argument you have constructed is, you can épater les bourgeois with all the letters, numbers, and symbols, and know that they couldn't follow you no matter how hard they tried, you can work the mathematics both ways, which looks as if you can even work backwards from the answer to the question, and so on.

It is a great help, no question about it, even though the remarks I made may be taken as disparaging. I am only disparaging those who, like ignorant religious people, mistake the ritual for the worship. We need every help we can get in investigating the extremely complex world of effects; and if mathematics can be applied, by all means apply it to the limit of what it can do. But don't depend on it as being what is "scientific" about science, or as a key to the truth. What is scientific about science is showing how the world is not really self-contradictory, by uncovering the facts that resolve its apparent contradictions.

And this has particular relevance when moving from the stage of observation to that of hypothesis. The hypothesis is, of course, a stab at the explanation of the effect; the picking out of a "p" to go into the implication "p implies q," where "q" is the effect you observed which can't be true unless "p" is true.

Unfortunately, there are an infinity of possibilities for "p," only one of which in fact makes sense out of the effect in question. And there is no mechanical way, and no mathematical way either, to make an exhaustive list of the possible explanations for any given effect, let alone to pick out which of them actually did the job in the case you are considering.

Here is where insight and genius come in. There are no rules here, because it is a question of seeing a relationship--and a relationship, moreover, with something created by using your imagination. At this stage, the crucial one, the logic of science is supremely illogical, though not unintellectual; it is very much like the "inspiration" of the artist, which we will treat in the next section.

The scientist, then, after becoming very clear about what is apparently contradictory about his problem, tries to imagine a situation such that (a) it makes sense in itself, and (b) it will make sense out of his effect.

It is actually quite important to stress this, obvious as it may seem. What science is all about is making sense out of what otherwise doesn't make sense. It is only secondarily trying to "find out the facts about our world." If it were trying to amass facts, then the kind of observation I mentioned about stones in my back yard would be of interest to scientists; but that's not it at all. Scientists are, if you will, anti-existentialists, who simply will not accept the world as absurd and say, "Well, that's the way things are," as Camus and Sartre would have it, for instance. They say, "Things may not be neat and rational, but they can't make nonsense." And they allege all of the progress that science has made as verification of the fact that their attitude is the correct one. (Of course, if you happen to think that things are absurd, then this argument, like all rational arguments, just washes right over you.) Still, I'm with them; I can't see any reason to hold that the world is unreasonable.

Some people call this finding of an explanation for an effect "induction," and so I suppose that this is the place to discuss the subject. I would rather restrict induction to deriving somehow statements about every instance of a given type of thing based on observation of only a few instances of that thing.

Let me make a bridge between this section on observation and hypothesis and the next on experiment and verification by treating the problem of induction as an example of a scientific effect, and giving some hypotheses that have been advanced to account for it; and then giving what I think is the cause of it--which, interestingly, is the facts about effects and causes.

The effect is this. We know that it's silly to question whether the next instance of hydrogen we find will combine with oxygen to form water, because we know that every instance of hydrogen will do this. But obviously, we haven't observed every instance of hydrogen; and so based on our observation, it would seem we have no grounds for saying this will happen every time. It looks like a case of reasoning from the indefinite to the definite, which is illogical. On the other hand, it obviously works, and in fact is underneath all logic (as Aristotle himself saw), because deductive logic starts from "universal" statements. There is some kind of reasoning going on here, because we are making statements about what we have not seen based on what we have seen; and the only way you can do that is to reason to them. But how can reason be illogical? How can you go beyond your evidence?

Deductively, you can go beyond your evidence, because the conclusion is implied in it. Then inductively the conclusion must be implied in the evidence also. But how can all instances be implied by just some?

That's the effect. Now one hypothesis is that of Hume, who simply says, "We can't actually get at what happens every time." If things have happened invariably in the past, we expect them to happen again, and the more often they have happened, the more we tend to think they always happen this way. And that's how we get our general statements, according to him. There's no logic behind it; any statement like, "Hydrogen combines with oxygen to form water" means nothing more than, "All the hydrogen I have seen so far has combined with oxygen to form water."

To test this, we want to know, remember, whether it makes sense in itself, and whether it makes sense out of what we have observed. First, does it make sense in itself? I don't see how it can. Presumably, Hume came to his generalization about inductions based on some observations. Hence, all his hypothesis amounts to, on his showing, is his saying, "All the inductions I have seen so far have been only summations of the past." Why he then expects others to listen to him when he is obviously predicting that this will be the case for others is beyond me.

Secondly, it does not allow us to distinguish between Arthur Pap's "lawlike generalizations" that I spoke of in Section 2 of this part and invariable occurrences where we find no grounds for predicting the future. It may be that a person has lived to be thirty years old and has not yet moved out of his parents' home; and every day for the past fifteen years, he has come back at night to this house. Would he then say, "For my whole life long I will come back to this house at night," as if it were some universal law like hydrogen and oxygen? He may expect to go back there tonight; but this expectation is very different from the expectation that the next batch of hydrogen will combine with oxygen to form water. "We've always done it this way" is the complaint that those who have formed habits make to the innovator--who then answers, "Is that any reason to keep doing it?" when he shows them a more efficient way. Yet we think that reason says precisely that we can't live forever, because every human being dies.

Besides, if a person mixes a gas from a bottle labeled "hydrogen" with one labeled "oxygen" and passes a spark through the mixture and the result is a pink solid rather than water, he wouldn't say, "Well, now, not every instance of hydrogen combines with oxygen to form water." He would say, "Somebody mislabeled one of these bottles," and would test them, confident that that hypothesis would be the one to be verified--and, let's face it, it would be.

So this hypothesis is just plain silly. You can make a little more sense out of it (but not much) if you say that, on being confronted with an invariable occurrence, you then define the object that is behaving invariably as "That which performs this particular act in these circumstances." Obviously, then, every case of the object so defined will act in the way in question. So, for example, you observe a lot of instances of hydrogen combining with oxygen to form water. You then define "hydrogen" to be "whatever it is that combines with oxygen to form water" (in fact, the name is Greek for "water-former"); and clearly if something combines with oxygen to form water, it is hydrogen, and if it doesn't it isn't. Your "universal" is now established.

But that won't work either, because it will now be like a mathematical object and have one and only one property. That is, if the behavior of hydrogen with oxygen and its results were invariable solely because you chose to define the substance based on this behavior, then you would have no grounds for talking about any other invariant behavior of hydrogen--such as the lines of its spectrum when excited, what it does with sulfur to form that gas that smells like rotten eggs, how it gets involved in acids, etc., etc.

What I am saying is that if you know hydrogen always combines with oxygen to form water because you defined it to be "whatever does this," then how do you know that this same thing also combines with sulfur to form hydrogen sulfide? You can't simply define it to do so, because you don't know whether both definitions will go together all the time.

"Well, they do go together, so why not make the definition, 'whatever combines with oxygen to form water and combines with sulfur to form hydrogen sulfide'"? Because (a) you are leaving open the possibility that you might find something combining with oxygen to form water which was not hydrogen (because it formed a pink solid instead of that gas when it combined with sulfur)--and you know that that won't happen--and (b) think of all the properties of hydrogen that scientists have discovered. The more you get, the more behaviors you would have to add to your arbitrary definition, making it that much more unlikely (if that was the sole basis for the generalization) that you'd find many objects with all the behaviors together, just by coincidence. No, it's only by induction that we know that the same stuff that combines with oxygen to form water also combines in this particular way with sulfur and has this particular spectrum when not excited and this other one when excited, and so on. So that hypothesis does not pass the experiment.

Some philosophers, like Rudolf Carnap, have regarded induction as an application of probability. You observe a number of instances (a sample) and argue from there to the whole population, by the use of statistics.

Clearly, we do make use of statistics; that's what pollsters do when predicting elections, and what insurance people do in deciding how much to charge for insurance, and so on. But every statistician knows that your statistics depend on your having a representative sample of the whole population when you make your observations. If you want to predict an election, you make sure that you don't just see Republicans, or the people who work for the League of Women Voters; the sample has to reflect the whole and the conditions in which the whole is expected to act. To the extent that you aren't sure if your sample is representative, to that extent your statement about the whole population is shakier.

Now the problem with this is that the generalizations we are most certain of are the ones where we have the least representative samples. After all, the only hydrogen we have observed combining with oxygen to form water is hydrogen on the surface of the earth--and under the special conditions of the laboratory at that. But hydrogen is the most abundant element in the universe, and is found mainly in stars and interstellar clouds. So we have observed the behavior of, on a conservative estimate, a billion-billionth of the whole population, and under conditions totally unlike those of practically a hundred per cent of it. To call our sample "representative" of all the hydrogen there is would be like asking two people in New England what their favorite food was and concluding that everyone in the world, including the Chinese, was inordinately fond of baked beans and codfish ("scrod," if you want to be really Bostonian).

Based on statistics and probability, then, it is unlikely to the highest degree that every instance of hydrogen would form water when combined with oxygen. Yet no one in his right mind would say that it is problematic that hydrogen does this.

And all the inductions we make are basically like this, except the ones that are specifically statistical, like generalizations about automobile accidents on holiday weekends. How, for instance, do you know you have a brain, and aren't, like the Scarecrow in Oz, bereft of one? The only people we've seen with brains inside their skulls have been people who have been very sick or injured, after all; and that has been a very small proportion of the population. Again, based on this, it is highly improbable that you have a brain.

Clearly, this theory is no better than the others. We will discuss statistics later, and show when it is applicable and why it is applicable; but the point here is that it is not the explanation of how we can make inductions.

Well then, how do we do it?

My theory is that we first observe enough instances of constant behavior that we become curious as to whether this is coincidence or something forcing the constancy. That is, the constancy is first seen as an effect.

We then hypothesize that the constancy of the behavior is caused by the structure of what is behaving (its "nature," if you will recall our definition of the term from Chapter 4 of Section 2 of the second part 2.2.4).

We then examine the object in question to find out if there is something about it that would allow us to predict the behavior in question; and insofar as that aspect of the being's structure is part of its essence, then we say that the object, just because it is this kind of object, behaves in the way in question under the proper circumstances.

Thus, we find water when we combine hydrogen with oxygen. One or two instances are enough to show a scientist that this is unlikely to be coincidence.

He then hypothesizes that the behavior is due to the structure of hydrogen (and of oxygen, of course). Examining hydrogen,(2) we find that the atom has only one electron, in a "shell" that can hold two; while oxygen has two "holes" in its outer shell. Two atoms of hydrogen would fill up these holes; and the results of analyzing water into hydrogen and oxygen confirm that there are two hydrogen atoms in water and one oxygen atom.

Hence, it is because of the nature of hydrogen that water results from what it does with oxygen; and in that case, hydrogen, to the extent that it is hydrogen, will combine with oxygen to form water. Voilà.

Now of course, these "universal" generalizations are compatible with variations. For instance, heavy hydrogen (which has a neutron as well as a proton) will form heavy water, which, among other things, is denser than ordinary water and harmful to living things in large doses, while ordinary water isn't. There is hydrogen peroxide, which has two atoms of oxygen bound to the hydrogen, and doesn't behave like ordinary water--and so on.

We recognize that inductions give us general truths, not necessarily "universal" ones in the sense that they take in absolutely every instance; but they are generalizations based on the nature of the thing in question, and are by no means arbitrary. This is why they support "counterfactual conditionals(3)" and don't lose their force.

So yes, we can say that every human being can see, even if we recognize that some human beings are blind. "Every human being can see" means, "Every human being is a seeing kind of thing," or why do we have eyes? But not every human being actually can see, because there are defective natures.

So it is effect and cause and actual investigation of the structures of things which allows us to make inductions, and it isn't either an illogical leap or something that belongs in logic, because logic is not directly founded on the nature of reality but on the nature (the structure) of the way we speak about reality.


Notes

1. For instance, those professors who give grades based on 100 points, and delude students into thinking that 83.2 means something. Or take IQ scores, where a 10-point difference is almost universally thought to have some significance.

2. The examination in this case would not be a strict observation, but is actually how hydrogen fits into the whole of the atomic theory of substances, which explains so much that it would be fantastic to think that it was radically wrong.

3. Just to refresh you, the form of this is, "If this case of hydrogen does not combine with oxygen to form water (the 'counterfactual conditional') it is still true nevertheless that hydrogen in general combines with oxygen to form water." A counter-instance of a summation that is not an induction destroys the generalization, which simply becomes "most of the time (so far, at least)."

Chapter 3

Experiment

The experiment is the initial test you give to show that your hypothesis is correct. Technically, it should be a test to show that the explanation you have guessed at does in fact imply the observations you have made so far; but in practice it functions not only to do this but to do what the "verification" mode does: it treats the hypothesis as a little theory, looks at what is implied in it, including what has not so far been observed, and checks to see if those things (which must be facts if it is true) turn out to be the case.

It is sad, in a way, that all of the exciting part of a scientific investigation has by this stage already gone by, and most of what happens from here on in is drudgery--and in fact a kind of dogged attempt to prove that your hypothesis or theory is wrong. The reason is that the theory is of the form "p implies q," as I said, where "p" is the explanation you hope or think is the cause,(1) and "q" is the observed effect that depends on it (or the predictions that follow from it). But by the logic of the hypothetical syllogism, nothing follows from knowing the truth of the consequent, and the only thing there is in the real world is the consequent; your hypothesis is a situation you made up. Even if it is a real situation, you still made it up insofar as it explains the effect in question.

Hence, there is no way to verify the hypothesis. You can falsify it by showing that something that is implied by it is not in fact the case, because the false consequent either means that the "p" doesn't exist, or that it's not connected to "q" by way of implication.

It isn't quite as cut-and-dried as this, however. In an extra-logical sense, insofar as other explanations than the one you have chosen are unlikely or impossible, the one you picked gains in credibility; and in the limit, it has to be true if it is the only explanation that makes sense. As Sherlock Holmes said somewhere, "When you have ruled out every other explanation, my dear Watson, the one remaining, however improbable, must be the truth."
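
The logical asymmetry can be put schematically in Python (the names and the toy "observations" are mine, purely for illustration): one false prediction refutes the hypothesis, while any number of true ones merely leaves it standing.

    # Modus tollens in miniature: if the hypothesis implies a prediction
    # and the prediction is false, the hypothesis is refuted; true
    # predictions, however many, never prove it.
    def test_hypothesis(predictions, observations):
        for p in predictions:
            if observations.get(p) is False:
                return "falsified"          # false consequent refutes
        return "not falsified (but not verified either)"

    observations = {"water forms": True, "gas is odorless": True}
    print(test_hypothesis(["water forms", "gas is odorless"], observations))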

There is one kind of experiment, called the "gedanken experiment" or "thought experiment," that deserves mentioning. This is one, as the name implies, that isn't actually performed, because the conditions for its performance either aren't actually possible or are so obvious as not to need bothering with. In either case, it is dangerous: in the latter because reality can sometimes be capricious and not behave the way you are convinced it will behave; and in the former because the conditions that make the actual experiment impossible are apt to be extremes, and bodies do strange things in extreme conditions (as witness the surprise of scientists who cooled certain metals down near absolute zero and found that they suddenly became superconducting).

I can give an illustration of a thought experiment if I treat the other topic I said I would discuss: that of probability and statistics.

The effect connected with probability is that probability deals with what is random, and yet it provides laws governing the random behavior. But "laws" are descriptions of constant, invariant behavior, and what is random is precisely what is not constant. How can there be constant inconstancy?

The first hypothesis you might offer for this is that the behavior is not really random, but only seems so; and probability reveals its non-randomness. But this won't really work. If you take one of a pair of dice (to make things simple) and roll it and you get an ace every time, then you examine the die to find out whether it was weighted on the side opposite the one-spot, or whether the edges were rounded, or whether it had in some other way been altered so that it did not behave randomly. The laws of probability will not work if something is favoring one side over the other as coming out on top; there must be an equal chance for every side to come up each time.(2)

When this is the case, then you can say that in the long run, the die will show an ace one-sixth of the time. Let me explain "in the long run." As Bernard Lonergan mentions in Insight somewhere, it means that there is no systematic divergence from the ratio in question--in this case, between the number of throws and the number of times the one-spot appears on top. There may very well be a "run" of some one side's showing up more often than a sixth of the time; but it will be counterbalanced at some other time by that side's appearing less frequently than the law predicts--and of course, these balances will be random also. The result is that as the number of rolls of the die becomes quite large, the number of times each side appears on top will be closer and closer to the number in the probability ratio.
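
Here is a quick simulation sketch of "in the long run" (the roll counts are arbitrary choices of mine): the observed frequency of the ace drifts toward one-sixth as the rolls pile up, though no single roll is constrained.

    import random

    # Frequency of the one-spot over more and more rolls of a fair die.
    for rolls in (100, 10_000, 1_000_000):
        aces = sum(1 for _ in range(rolls) if random.randint(1, 6) == 1)
        print(f"{rolls:>9} rolls: frequency = {aces / rolls:.4f}"
              f"  (1/6 = {1 / 6:.4f})")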

But why is that ratio one-sixth with the die? Is this connected with the die's having six faces, only one of which can appear on top at any one roll? Suppose we make this our hypothesis: the fact that the die has six faces, only one of which can show up on top, causes its behavior to be such that a given face will appear on top a sixth of the time.

And here is our thought experiment. We find that with a coin which has two faces, heads will come up half the time. (You can see why this experiment doesn't have to be performed; it's already been done often enough.) And with a dodecahedron, any given side will come up a twelfth of the time, and so on.

We can now formulate a more refined hypothesis: It is the constancy of the structure underlying the random behavior that forces the behavior not to be totally random. This solves the basic effect. It isn't that the behavior is not random; there is nothing that picks which side will come up at any given roll of the die. Still, it isn't the randomness itself which is lawful, but the fact that constraints are placed on it by the structure of what is behaving randomly; and these constraints prevent totally random behavior, leading to the probability ratio between what appears on top and the number of rolls.

If this is true, then the laws of probability are not "laws of chance," but the laws of something constant that prevents chance from being complete randomness.

Let us test this hypothesis with a thought experiment. Imagine now that you have a "die" made of soft plastic, which will be deformed as it hits the table you are rolling it on. You place a spot on it somewhere, and then roll it many times randomly; and at each roll, it ends up having a different number of "faces" from what it had in the last roll--anywhere from one (a sphere or oval), or two (a lens), up to infinity (which would again be something like a sphere, and so would equal one). Now, what will be the ratio of the spot's coming on top to the number of rolls? There is no answer, because now everything connected with the rolls is random.

We could test it again by another experiment. Suppose your die was such that at any given throw it could have 4, 5, 6, 7, or 8 faces, but no others. Would the laws of probability apply? Yes. Without trying to figure out the actual ratio (I am terrible at applied mathematics), one time out of five the die will have four faces, and the probability of the face you are interested in coming out on top would be one in four during those times. One time out of five, it will have five faces, and the probability during these times will be one in five--and so on. If you combine all of these according to the laws of probability, you will come up with a number. Again, there is a constraint on absolute randomness because of the constant underlying structure; only in this case, the structure is whatever always keeps the die from having more than this set of numbers of faces.
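
For what it's worth, the arithmetic the rule implies can be carried out in a line of Python (a sketch, assuming, as stated, that each number of faces from 4 through 8 is equally likely):

    # Each throw: a 1/5 chance the die has k faces (k = 4..8); given k
    # faces, the marked face shows with probability 1/k.
    p = sum((1 / 5) * (1 / k) for k in range(4, 9))
    print(p)  # about 0.1769 -- a definite number, despite the randomness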

So our problem is solved and reason is once again vindicated. Now we can state the theory explaining why probability works.

Theory: The laws of probability are due to the fact that some kind of constant structure behind what behaves randomly prevents the behavior from being completely random.

But there are a couple of things to note here. There is no logical necessity (as mathematicians seem to think) between, say, the fact that there are six faces on a die and only one can come up at any given roll and the prediction that in the long run the die will show an ace a sixth of the time. It "stands to reason" that this would be the case, but there are a lot of things that "stand to reason" that aren't true. It "stands to reason" that a ten-pound weight will fall down faster than a two-ounce weight; but you won't find it doing this if you discount air resistance and so on.

There is no reason why, even if the die had only six faces all the time, the one-spot's appearance couldn't in fact be totally random; you wouldn't expect it to be, under these circumstances, but there's nothing that makes it a contradiction for it not to be. After all, the ratio predicted is a ratio between the number of events of a certain type and the total number of events, and the ratio you have discovered is a ratio between actualities and possibilities for a single event. And possibilities are just possibilities; there is no necessity for all possibilities eventually to be realized, any more than the fact that a man can have sex means that he can't be celibate forever, or the fact that you could be a philosopher means that you eventually have to be one.

This lays bare the silliness of people who say that if you put enough monkeys banging away at enough typewriters, one of them would eventually type out the whole script of Hamlet, just because one of them could, by chance, do it. The reason it is silly is that it is also possible that this particular combination of letters would never be hit on by anyone (because at any given try, it is possible both to type out the script and not to do so); and so if all possibilities must be realized given an infinite number of tries, then it will eventually be true both that some monkey will do it and no monkey will do it.

So just because it seems reasonable to say that a structural constraint would lead to a probability ratio among behaviors, it isn't positively unreasonable to say that then again it might not. But it turns out that in practice, the theory works.

That is, people have tested it, and found that in the long run dice do behave as probability says they will (which is what keeps casinos in business); and so what "stands to reason" also turns out to be a law of nature. Hence, the laws of probability are basically empirical laws, not strictly mathematical ones. That is, the mathematics prescinds from what actually goes on in the world; but the fact that it applies to the world has to be empirically verified.

Note, by the way, that there is a "law" that also "stands to reason" which in fact isn't verified: the "law of averages." It reasons this way: "Heads on this coin has to come up half the time in the long run. There have just been twenty heads in a row. Therefore, compensation must set in, and in the future it is more than a fifty-fifty chance that tails will come up."

Many is the man who has lost his shirt based on this fallacy. True, in the long run, the probability ratio has to obtain; and this is predictive for the total number of flips of the coin, if that number is very large. But it has no predictive value for the next flip. Why? The answer usually given is that the coin doesn't know that it's had a run of twenty heads. True, but it doesn't know either about the total number of flips, and why does it work out with the total number and not with the next one?

That is, if the odds against getting twenty heads in a row are about a million to one (as, for a fair coin, they are), the odds against getting twenty-one heads in a row are even greater--twice as great, in fact. The exact figures don't matter; the principle is valid. Then why can't you bet using the much smaller probability based on the twenty-one in a row?

The "answer" is that most of the "unlikelihood" of the twenty-one heads has been used up in the twenty heads in a row. And, the probability theorists tell us (and it is verified again in casinos every day), the likelihood left over for the twenty-first flip after the twenty heads in a row is just exactly fifty-fifty. If you take all twenty-one together, it is enormously unlikely to happen; but if you take twenty-one after having twenty, it is a tossup. Sorry. There is no "law of averages," but there are laws of probability. But note that this is due to the fact that this is the way things actually work; there is no special reason why it can't be the case that a long run of one possibility will not be compensated for in the near future.

So what I am saying is that the universe is so built that the laws of probability work, and the law of averages doesn't.

Note that if this theory of the foundations of probability is true, those who say that the world evolved "just by chance" are dead wrong, given that the laws of probability govern evolution. If it came about just by chance, then there would be no way to apply these laws. No, once the laws of probability operate, they are laws of some nature that prevents the behavior from being totally random; and so evolution as a process is precisely not due "just" to chance but to what it is about the evolving universe that (a) enables it to perform a certain range of behaviors, (b) prevents it from doing anything outside that range, and (c) doesn't pick out which behavior in that range is going to occur at any given time.

The chance element, therefore, is only one out of three necessary conditions for evolution to occur. If the first weren't there, obviously nothing would happen. If the second weren't there, there would be no predictability at all about what had happened and what will happen. Of course, if the third weren't there, then evolution would be totally predictable, à la Laplace's discredited view that if we knew absolutely everything about the motions of all the particles in the universe, we would know its whole past history and be able to predict everything that will happen in the future.

What I am saying here is that there is no way dogs can evolve into jellyfish (at least I presume that no matter how much a bitch's genes are interfered with, it simply is not possible for her to give birth to a jellyfish). And animals evolved from other animals because chance alterations in the genetic structure were such that the resulting organism could still live and survive--which is a tall order, given how tenuous our hold on this super-high energy level is. So the genetic structure of any organism exercises constraints on what can come from it (in fact, it normally excludes anything but the same form of life, as we saw), as well as making it possible for some new living body to come from it. Just imagine a stage of evolution that resulted in the next generation's being sterile like mules. End of evolution.

Because probability (and its inverse, statistics) plays such a large role in our lives and in science now, people have been mesmerized by the chance element of it, and said that because of this there is no such thing as a "nature" any more, and everything is just random. But probability proves "natures," it doesn't deny them. It is just that the natures don't directly constrain the action to be one single, inflexible act every time; they constrain the acts, enabling several but no more than a fixed number of possible acts.

One final remark about probability, and then I will talk about statistics. Probability doesn't really have anything to do with likelihood as opposed to certainty. If you recall our discussion of certainty back in Chapter 5 of Section 1 of the first part 1.1.5, I said that certainty and likelihood had nothing to do with probability in the sense of the "laws of probability." (Incidentally, I said there that I would discuss probability "much later." You had no idea how much later, did you?)

Certainty is the knowledge that you are not mistaken, and is the lack of evidence against what you think is true, coupled with some evidence for it. Likelihood (which implies doubt) supposes that there are reasons for saying that what you think is true might not in fact be true; but the reasons for saying that it is true outweigh the reasons against it.

But probability doesn't deal with this. First of all, the laws of probability are certain (given their empirical verification), not likely. There is reason for saying that they are true, and no reason and no experience which would say that they are not true.

But they don't deal with reasons for saying that a given event is a fact; they only deal with the relation between a given actualization and the total number of tries; and that relation is certain, not likely.

You can say that it's fifty per cent likely that your coin will come up heads; but that doesn't really mean more than that there are two possibilities only one of which can be actualized; and you are certain of that. It has no real predictive value for what will happen on the next flip. On the next flip either heads will come up or it won't; and it doesn't make sense to say "half the time" it will come up, because you are talking about this definite flip, not a number of them.

Hence, probability should not be confused with likelihood. It's all right to talk about a "sixty per cent chance of rain tomorrow" in a kind of loose sense, informing the public that there is a weather situation that allows something corresponding to a hundred possibilities with sixty of them rain, and let them figure out whether the likelihood of rain (it is likely) means that they should take their umbrellas. What I am saying is not that probabilities don't generate likelihoods in people's minds; it is just that the likelihood, strictly speaking, doesn't have a number attachable to it corresponding to the probability ratio. In one sense, if something has a sixty per cent chance of happening, there is reason to expect it; but it would be hard to say that there are sixty reasons out of a hundred for expecting it.

Statistics, then, as the inverse of probability, works this way: First, the scientist notices some correlation between events and the objects involved in the events, and suspects (as we saw in induction) that this is not a chance correlation. Smokers take smoke into their lungs, and there seem to be a lot of lung-cancer patients who are smokers.

The observation then establishes the correlation itself: that indeed smokers are over-represented in the population of lung-cancer victims. That is, the proportion of smokers in the general population is, let us say, one in twenty; but the proportion of smokers among lung-cancer victims is one in ten. These are figures I am making up just to give you the idea.

For the statistician now to assure himself that this correlation is not chance, he must formulate a hypothesis that there is something in the nature of the object in question to allow one to expect the behavior observed.

This step is crucial. If it can't be done, then there's no reason for expecting probability to be at work here; and pure chance can come up with correlations that have no foundation. People, they say, have found very high correlations between such things as the number of reports of hearing the mating call of the male caribou in Washington State and the number of immigrants into the Port of New York. In fact, what the tobacco companies have been arguing for years is that the correlation between smoking and lung cancer is like this.

But of course, it stands to reason that if you take a substance known to be toxic into your lungs, it won't do your lungs any good; and experiments with animals show that the things in tobacco tar produce cancers when rubbed on animals or forced into their lungs. This is part of the experiment stage: to find what it is about the nature in question that produces the constraint on events that causes the probabilistic correlation.

The other experimental test of the hypothesis consists in showing that the same correlation stands up constantly. For instance, there are fewer smokers now than there were twenty years ago, and more lung cancer victims than there were twenty years ago. But it is still true that the smokers are over-represented in the population of lung-cancer victims. That is, today, let us say, the ratio of smokers to the general population is one in a hundred; but the ratio of smokers with lung cancer to the general population of lung-cancer victims is one in fifty. There are still twice as many smokers in the lung-cancer group as there are in the general population. The only thing the increase in lung-cancer victims and the decrease in smokers proves is that there are more things that give people lung cancer nowadays than there used to be.
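The arithmetic of over-representation fits in a few lines of Python. The ratios are the made-up ones above, not real epidemiology; the point is only that the ratio of the two proportions stays the same even while both populations shift:

    # Illustrative figures only (from the text, not from real data).
    def over_representation(share_among_victims, share_in_population):
        return share_among_victims / share_in_population

    # "Twenty years ago": smokers are 1 in 20 of everyone, 1 in 10 of victims.
    print(over_representation(1/10, 1/20))    # 2.0

    # "Today": smokers are 1 in 100 of everyone, 1 in 50 of victims.
    print(over_representation(1/50, 1/100))   # 2.0: the correlation stands up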

But this shows why it is important when you use statistics to know what is behind the correlation, so that you can isolate the correlation from all the extraneous factors that have nothing to do with what you are focusing on.

Generally speaking, when you are dealing with statistics, the cause of the effect in question (the correlation) is some very abstract property of a number of different things. For instance, the cause of lung cancer is "a carcinogen taken into lungs that can't overcome it." But there are all kinds of different substances that are carcinogenic and can find their way into people's lungs, and there are, presumably, all kinds of levels of resistance to the activity of various carcinogens. Hence, you would be able to predict from this situation that you couldn't set up a one-for-one correspondence between getting cigarette smoke into your lungs and getting lung cancer (the way you can say that having your head removed is invariably fatal); the relationship is bound to be probabilistic. You have found the nature; but the nature allows several different behaviors, though only a limited range of them.

Theory: The use of statistics is valid when the user knows that there is something about the nature of what has a correlation attached to it that (a) allows several different behaviors, but (b) constrains them to be only these several behaviors.


Notes

1. The cause is the real explanation, you will remember.

2. If this still leaves some randomness, probability can take the weighting into account.



Chapter 4

Theory and verification

Once the hypothesis has passed the test of the experiment, it is no longer called a "hypothesis" but a "theory." The word "theory" comes from the Greek theoría, which means "a looking at" or in English, "a way of looking at things." So it's not something presupposed any more, but it makes up part of our attitude toward the world.

The idea behind this is that a theory is to be accepted, absent evidence to the contrary. Why? Because something doesn't make sense without it, and makes sense with it. If you didn't have a reason for rejecting it, then you would be accepting the world as unreasonable. The theory may not be true, because the experiment, as I said, hasn't been able to verify it (in the sense of "prove it true"), but just has failed to falsify it; still, the fact that it hasn't been verified is no reason for rejecting it, because if you don't have any reason other than that it hasn't been verified, your rejection means the acceptance of the unexplained effect, or the acceptance of a contradiction for which you have no resolution (or for which you have an untested "explanation," which amounts to the same thing).

But as usual, things are not so simple. There is, for instance, the Ptolemaic theory of the earth-centered universe with the heavenly bodies circling in spheres and epicycles around it, which hasn't really been falsified, since if you wanted to, you could fix it up to fit in with all the observations up to the present. Interestingly, Newton's view of the universe has been falsified, as we will see shortly in discussing predictions. But there is Einstein's finite but unbounded universe, which in itself sounds even more bizarre than Ptolemy's. Why is one accepted and the other rejected?

The answer is that there are three basic checks on a theory, which don't prove it, but allow you to choose between competing ones that fit the facts so far observed; they are simplicity, logic, and comprehensiveness.

The simplicity of a scientific theory obviously does not mean that it is simple to understand. General Relativity is a simple theory of bodies' motions in space, but you have to know esoteric mathematics like the tensor calculus and a good deal of physics to be able to follow it. The Ptolemaic theory of heavenly bodies is much easier to understand, but it's not simple.

Simplicity is just an application of Occam's Razor, which I have mentioned a couple of times in passing in this book. It is time to see why it is a good canon of a theory. First of all, the notion, formulated by William of Occam, says that a theory is better the fewer things not in evidence it assumes to be true; and the ideal, of course, is none at all or just one. (It's called a "razor," of course, because you "shave off" everything from the cause except what's absolutely necessary for the effect to be what it is.)

Now why should this be a criterion for a good theory? Who says that you're more likely to hit upon the correct explanation if you pick one that doesn't have many parts to it rather than one that has a lot of them? We see every day events depending on the convergence of a huge number of other events. Why did the tree in my back yard grow there? Let us say that it was because a squirrel picked up the nut from its parent tree and instead of eating it buried it there, at the edge of but not in my lawn, so that the sapling didn't get mowed down, and it was in soil that was rich from the grass clippings and leaves rotting above it; but it was not so far in the woods that it didn't get light--and so on and so on. The cause of a singular event like this is often a chance concatenation of an enormous number of factors, none of which can be left out. Simple explanations (i.e. explanations that reduce everything to one factor) are in cases like this simplistic explanations; and it is the sign of the fanatic that he doesn't recognize this.

But for this very reason these explanations are not theories in the scientific sense of the term. It isn't, as scientists so often say, that they aren't "repeatable." Theories about the evolution of the universe are not testable by "repeating" evolution, and they are theories (and testable, by the way).

No, the reason lies deeper in what you mean by a theory as an explanation. I stressed at the beginning of this section that science wasn't interested in finding out facts so much as it was in making sense out of the otherwise unintelligible facts that confront it. True, the cause that makes sense out of the unintelligibility will also be a fact; but it is sought not because it is another fact to know, but because it is the fact that makes sense out of the effect.

Now if we look at a complex theory, we will see why it is that a theory is better the simpler it is. Take the Ptolemaic theory of the heavenly bodies. It assumes that each body is on a sphere that is centered on the earth as the center of the universe and is rotating around it. The planets, however, are on little spheres on the surface of their main sphere, and as the main sphere rotates, the little one does also, making the planet move erratically as seen against the background of the sphere on which the stars appear (which of course moves with perfect regularity around the earth once every sidereal day). The different speeds of the spheres and the different distances make them appear in the different positions with respect to each other in the course of the years.

All right, but now what connects them all into a system? How are they interrelated? There is no answer to this in the Ptolemaic view; they just happen to be arranged in such a way that the appearances are what they are. This is one of the reasons why it is so easy to fix up this view to fit new observations; if Mercury, for instance, is in a position slightly different from what past and less accurate observations would lead you to expect, you just adjust the size of Mercury's sphere or its epicycle or its distance until the motion fits the observation. If stars are discovered to move with respect to each other, you put them on different epicycles within the sphere of the stars; if astronauts go through these spheres on the way to other planets, then you just make the spheres penetrable--force-spheres, not bodies of crystal. And so on.

So ultimately your explanation of the motions of the heavenly bodies is "they're just arranged this way, that's all." But that's no explanation. As you can see from the discussion of probability, chance cannot be a cause. Insofar as the factors in some complex theory, therefore, are not connected, then they just happen to be together by chance; and insofar as they are together by chance, the explanation is no explanation at all. It's like what the medievals were accused of saying (the serious ones didn't): "it's the will of God" when confronted with strange and anomalous events. Since that was the "explanation" of anything and everything, it is the explanation of nothing.

But then when Newton developed his Theory of Universal Gravitation, you needed to assume only two things: (a) that bodies were attracted to each other proportionally to the product of their masses and inversely as the square of the distance between their centers, and (b) that there was an initial tangential velocity (i.e. one at right angles to the line between the centers) great enough to prevent their falling into each other.

As to this second point, you don't even have to assume some "centrifugal force" as a separate force. If you throw a ball parallel to the surface of the earth, it will curve downward in a steadily increasing arc (a parabola, if you're interested). The harder you throw it, the farther it will go (the shallower the arc) before it hits the ground.

Now suppose you are standing on the summit of Mount Everest so that there are no obstructions ahead of you at this height anywhere in the world (another thought-experiment, notice), and you throw the ball very hard straight out. It will curve downward toward the earth. But the earth itself is curved; and so if you throw it hard enough, the arc it is traveling in toward the earth might be shallower than the curvature of the earth, in which case it will miss the earth and continue on around it, and eventually wind up hitting you on the back of the head. Now of course, to make this work, you had better be on the moon where there's no air rather than on the earth, and you had better develop your pitching arm rather thoroughly; but I think you can see the principle. Given an initial tangential motion of the proper speed ("orbital velocity"), then the single force that makes bodies fall down keeps satellites up. Any speed beyond this just changes the shape of the orbit until "escape velocity" is reached. But that's another story.
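For the curious, the two speeds involved come out of the standard formulas v = sqrt(GM/r) for a circular orbit and v = sqrt(2GM/r) for escape; a few lines of Python with rough figures for the earth give their size:

    import math

    GM = 3.986e14    # gravitational parameter of the earth, m^3/s^2
    r = 6.371e6      # radius of the earth in meters (Everest adds only 9 km)

    v_orbit = math.sqrt(GM / r)        # arc of fall matches the earth's curve
    v_escape = math.sqrt(2 * GM / r)   # beyond this, the ball never comes back

    print(v_orbit)    # about 7,900 m/s ("orbital velocity")
    print(v_escape)   # about 11,200 m/s ("escape velocity")

At nearly eight kilometers a second, you can see why the pitching arm was the weak point of the thought-experiment.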

Now this one force of gravity ties all the planets together into a single solar system around the sun, explains why the orbits aren't circles but ellipses (that's what depends on the initial speed), explains what the sun itself is doing inside the galaxy we call the "milky way," explains the shape of that and other galaxies, and explains systems of stars and galaxies. About the only thing it doesn't explain is why all the galaxies are moving away from each other (except the ones locked into a system like the milky way and our little companion that can only be seen from the southern hemisphere); and for this an initial explosion (the "big bang") had to be added.

Now there is an explanation. Assume just this one fact, that there is a force of attraction between bodies due to their mass, and falling bodies make sense, orbiting satellites make sense, planetary systems (with satellites like the moon around the planets) make sense, galaxies make sense.

Unfortunately, it's the wrong explanation, as we'll see. But you can understand why this is a theory that, absent evidence to the contrary, is to be preferred as an explanation to Ptolemy's. It actually explains; Ptolemy's doesn't.

So the reason a simple theory is preferable to one that assumes more things not in evidence is not really because it is "truer" by that fact; it is because the more complex one relies on coincidence among its parts, and coincidence precisely doesn't explain.

Now of course, theories can have complex parts if they can show what the relation is among them and don't just have them working together by chance. But of course, in that case what connects them is the true cause; and so even if the theory has complex parts, it's still basically a simple theory, because ultimately it rests on the one fact which connects all the parts.

If you look at this theory of science of mine, you will see that it rests on the one fact that scientists know that contradictions don't really occur, and yet they find evidence of contradictions in what confronts them (effects). Given that one fact, everything else follows: observation, hypothesis, experiment, theory, and (as we will see) verification. So this theory of science is a simple theory of science, even to explaining why simplicity is a criterion of a good scientific theory.

Now of course, the criterion that the theory has to be logical simply means that you should be able to deduce all of the otherwise contradictory effects from the cause by the "p implies q" type of reasoning, where "p" is your statement of what the cause is, and "q" is the event in question. For instance, if theories are supposed to be explanations of events that are otherwise contradictory, then it follows logically that simplicity in the sense discussed above would have to be a criterion of a good theory. And it has been recognized as one, by people who knew it worked, but didn't know why.

The third criterion is actually connected with the second; it is the criterion that the theory has to be comprehensive. What that means is that the theory has to explain all of the aspects of the problem in question, or it fails as a theory. If one tiny part of the effect remains unexplained by the theory, then the theory can't be stating the cause of the effect, because part of the world remains self-contradictory under it, and the theory's whole purpose is to show how the world, assuming it, is not self-contradictory. (Once again, notice that it is effect and cause that explain why this criterion is a criterion of a good theory.)

It does not matter how insignificant this aspect of the world is; if it is such that the theory has to make sense out of it, and the theory doesn't, then the theory is wrong. We have seen any number of examples of this in the course of this book. To take just one that comes to mind, there is Skinner's supposedly "scientific" theory of why we think we're free when we're not: that we aren't aware of what's forcing us to choose and do something. But, as I mentioned, that would mean that compulsives, who fit the antecedent, would then have to feel free, and they don't. Hence, his theory doesn't explain something that it has to explain, and so it must be rejected.

Or take another famous case, that of Newton's Theory of Universal Gravitation, which was supposed to explain the orbits of the planets, including their shapes and so on. One of the things his theory does is say that when there is something like the sun that is basically determining the orbit around it of Mercury, say, and there are also the other planets pulling on Mercury from outside, even though they are moving around the sun themselves (and so are in different positions at different times), the orbit of Mercury will "precess" due to these "perturbations." Precession is what a spinning top does as it begins to slow down: as it tips, the whole top begins moving around in a circle.

Imagine Mercury's orbit, then, as an oval, with the sun offset toward one end of it. The end nearest the sun is called the "perihelion," because the Greek word means "nearest the sun." Now if you imagine the whole orbit moving in a circle around the sun (i.e. with the perihelion point moving in a circle around it), you get the basic theoretical picture of precession. Of course, the actual motion of Mercury is like that child's toy of many years ago, the "Spirograph," where a pen traced intricate patterns by being attached to intermeshing circular gears; but that need not worry us.

Obviously, to figure out what the precession of Mercury would have to be due to the presence of the orbiting earth, and also of Venus, and Mars and Jupiter and so on was no small undertaking; but it was done, and it agreed with the observations on Mercury--until the beginning of this century, when more accurate instruments and calculations showed that Newton's view of how much the precession should be was off by some forty-three seconds of arc per century. To make this intelligible, an arc of 90 degrees is the part of the circumference of a circle cut off by radii which make an angle of 90 degrees at the center of the circle. An arc of one degree, then, is one three hundred sixtieth of the circumference; an arc of one minute is a sixtieth of this, and an arc of one second is a sixtieth of that. The precession of Mercury's orbit was off by forty-three of those per century. Not, you would say, enough to amount to a hill of beans.
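If you want to see just how small that discrepancy is, the conversion is simple arithmetic:

    # Forty-three seconds of arc per century, as a fraction of a full circle.
    arcsec_per_degree = 60 * 60            # 60 minutes, each of 60 seconds
    full_circle = 360 * arcsec_per_degree  # 1,296,000 seconds of arc
    print(43 / full_circle)                # about 0.000033 of a circle per century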

Nevertheless, it was enough to destroy the theory. Newton said that Mercury had to be here today, if his theory was true; but Mercury was over there, a few yards off in the millions of miles of its orbit. People checked and rechecked, and couldn't make the observations agree with the calculated position, and couldn't make the calculations come out different. Something that was supposed to be explained couldn't be explained. "(p implies q) and not q implies not p."
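That schema (modus tollens, the logicians call it) can even be checked mechanically; a few lines of Python run through all four combinations of truth values:

    from itertools import product

    def implies(a, b):
        return (not a) or b

    # "(p implies q) and not q implies not p" holds in every case:
    for p, q in product([True, False], repeat=2):
        assert implies(implies(p, q) and not q, not p)
    print("modus tollens holds in all four cases")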

Einstein then came along with a different notion (warping of space-time instead of a force of attraction) and explained all that Newton explained plus the location of Mercury which Newton's theory couldn't explain; and that's why Einstein's view is held and Newton's isn't. Einstein's (as far as we now know) is comprehensive; it explains all that it's supposed to explain. Newton's, for all its simplicity and elegance, isn't; and so it's just wrong.

Now connected with this notion of comprehensiveness is the fact that a theory almost inevitably generates predictions, and these give rise to the final step of scientific method, that of verification, which as always is at best "non-falsification."

Actually, the problem that destroyed Newton's theory could be called a falsified prediction. But to see why, we have to see what the basis of predictions is. And once again, the notion of effect and cause gives the explanation.

It is in practice impossible for your initial observation to take in every aspect of the effect in question, especially if it is an effect of any generality at all, such as the effect connected with the fact that bodies fall down at a constant acceleration. Hence, the explanation you come up with in your hypothesis, if it is really the cause of the effect, will, of course, explain all the aspects of the true effect, not just the ones you happened to have seen and which piqued your curiosity; and not even just the ones you ran across in your careful observation.

Hence your "p" in the "p implies q" will almost inevitably actually have more logical implications than the ones you happen to have observed; and these will be the predictions from your theory. Newton didn't have the orbit of Mercury before him as he developed his theory, I assume; but the theory, as accounting for all motions of all heavenly bodies, would have to include the orbit of Mercury; and so you could predict the orbit from the theory. Unfortunately, it turned out to be different from what the theory predicted, and this destroyed the theory.

Einstein's theory, in fitting the observations of Mercury, also would, of course, apply to the other planets; and so his theory predicted a similar divergence from Newtonian calculations in the orbit of Venus; and this was checked and found to be as Einstein said it would be.

Further, his theory said basically that bodies left to themselves fall (move with constant acceleration) in straight lines (the shortest distance between points), but that in the presence of massive objects, space gets warped out of Euclidian shape, so that straight lines no longer look like what Euclid thought they did, and are sometimes orbits (I kid you not) in Einstein's geometry. It follows logically from this that anything that travels through space in straight lines (even massless things like light) will be following the weird-shaped straight lines of the new geometry, and from our Euclidian perspective will travel a curved path.

The theory therefore predicted that during an eclipse, when the sun is darkened enough so that you can see the stars behind it, the stars seen near the sun will appear to be in the wrong places, because the light coming to us from them will be bent along the curve around the sun (i.e. the straight line between them and us will be a Euclidian curve). And observations of the stars in the background of an eclipsed sun showed that they were not in the positions we knew them to be in, but appeared displaced--just where Einstein said they would appear to be. Another prediction verified. This displacement would have to occur if Einstein's theory is true, because it logically follows from the theory.
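The size of the predicted displacement for a ray grazing the sun's edge comes from the deflection formula 4GM/(c^2R); a short computation with standard figures for the sun gives the famous number:

    import math

    G = 6.674e-11   # gravitational constant, m^3/(kg s^2)
    M = 1.989e30    # mass of the sun, kg
    c = 2.998e8     # speed of light, m/s
    R = 6.96e8      # radius of the sun, m

    deflection = 4 * G * M / (c ** 2 * R)    # in radians
    print(math.degrees(deflection) * 3600)   # about 1.75 seconds of arc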

But of course, the fact that it occurs doesn't prove the theory true, as I have so often said, because nothing follows from "(p implies q) and q."

Nevertheless, insofar as the predictions predict events that are very unlikely on any other supposition than the theory, the theory is on that much firmer ground. If light has no mass, it can't be attracted by massive objects, it would seem; so why should it be bent around them? But the General Theory of Relativity doesn't suppose a "force" of gravity at all, but just a warping of the geometry of space.

Note that how space gets warped and what it means to have "nothingness" warped is not something Einstein undertakes to answer, and it is his right not to have to. From the fact that space is warped, the rest follows, and so he can start from this as his explanation of what is implied by it, without having to go back to its own explanation. All that means is that, to the extent that the fact he uses as his cause doesn't make sense by itself, he has not got the ultimate explanation.

That is, as I pointed out when discussing effects and causes in Section 2 of the first part, all you need is some fact which is necessary to account for your effect; you don't need to go behind it to the condition (the cause of the cause) for the effect. So there is no need to fault Einstein for not explaining how space-time can be warped in the presence of massive bodies. I tried to give some hint of what might be behind this in the discussion of distance, position, and space in Chapter 5 of Section 1 of the second part 2.1.5.

So not every scientific theory has to be "repeatable"; but it would be rare indeed for any theory not to have implications beyond what were the initially observed factors of the effect; and so it is all but inevitable that theories will predict--and that they will predict things that can be checked.

I have tried to show in the course of this book that this applies to philosophical theories as well as to scientific ones. A major reason why I disagree with the theories I disagree with isn't that I don't find them "congenial" to my Weltanschauung, but that I have discovered predictions from them (like the Skinnerian prediction above) that just don't fit the facts. And since I have something of a scientific turn of mind, I can't accept them under those conditions.

My own theory of thinking and reasoning, by the way, predicts that you ought to be able to take every aspect of human mental activity under its umbrella and show how it follows from trying to know relationships among objects (or relationships among relationships among objects) and how we try to reconnect objects so that we can see new relationships that we haven't seen so far.

And up to this point, I have been able to show why there are the different modes of thought of mysticism, logic, mathematics, and now science; and I hope to show in the next chapter how this basic insight also explains art, and in the following one how it takes evaluation into account.

I don't see any reason why we shouldn't draw out the logical consequences of philosophical theories and test them; and if they predict things that aren't so, reject them. That's what this by now immense tome is partly about; the rest of it is an attempt to develop a theory of the world and our place in it (including our knowledge of it) that will predict things that stand up to the test. If you are reading this and I have been dead for a while, that in itself is a prediction from my theory (because it's my ambition that this should be so, and a prediction from Chapter 4 of Section 4 of the third part 3.4.4 is that legitimate ambitions we carry beyond the grave will be fulfilled).(1)

There are only two brief topics left that I want to discuss in this superficial sketch of scientific thinking: why scientists use models, and what a scientific law is.

First of all, models in scientific theory are often looked upon as metaphors, but they are really analogies. Metaphors, as we will see in the next section, are esthetic facts, not analogies. When we say that the meadow is smiling, we are not drawing an analogy between the meadow and a smiling face, or there would be some indirect perceptive similarity between the two. Analogies, if you will refer back to Chapter 7 of Section 2 of the first part 1.2.7, are similarities in causes that are known only by the fact that the effects are similar. That is, because the effects are similar, the theorem that similar effects have analogous causes comes into play--and so you know the fact that the causes are somehow similar, without knowing the points of similarity. Metaphors like the smiling meadow, however, are simply using the emotions as receiving instruments analogous to sense organs (since the emotions do, as we saw in Chapter 5 of Section 2 of the third part 3.2.5, respond to outside energy as well as the state of the body); and so just as "the meadow is green" means "the meadow has in it the cause of my eyes' reacting the same way they do when I look at emeralds and so on," so "the meadow is smiling" means "the meadow has in it the cause of my reacting emotionally the same way as I do when someone smiles at me." So it is silly to examine meadows to try to find where the lips are.

On the other hand, if you happen to know that the "q" from this particular theory looks a lot like the "q" from some other theory, then the theorem of similar effects takes over, and you can argue to some kind of similarity between the "p's" of the two theories. Hence by examining the effects that are similar to the effects of your theory, you can actually learn something about the cause in relation to the cause of those effects.

So, from noticing that the equation of an electron (which we can't observe directly, because it's too small) looks a lot like the equation of motion of a little speck of dust, with things that seem to resemble the three-dimensional translational motion and also the spin of the particle on each of its three axes, we can say that by analogy the electron is like a little particle, and can then talk about its "spin," referring to whatever about it makes its equation look like the spin of a particle.

This does not mean that an electron is a tiny particle, however; because one thing that particles don't do is interfere with one another like waves; but electrons do. That is, the equation of an electron also has some aspects to it that look very much like what happens if you shake a length of rope and the hump moves down the rope, and then you shake it "out of phase" with your initial one, and parts of the hump get bigger and parts get smaller. That's the kind of interference I mean. I talked about it earlier when discussing position in Chapter 5 of Section 1 of the second part 2.1.5.
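The rope picture can be put into a toy computation: add two equal waves in phase and they reinforce; add them half a cycle out of phase and they cancel. This is only a sketch (real electron interference involves complex amplitudes), but the pattern of reinforcement and cancellation is the point:

    import math

    def superpose(phase_shift, t):
        return math.sin(t) + math.sin(t + phase_shift)

    for shift, label in [(0.0, "in phase"), (math.pi, "half a cycle out")]:
        peak = max(abs(superpose(shift, t / 100.0)) for t in range(628))
        print(label, "peak amplitude:", round(peak, 2))   # 2.0 versus 0.0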

But particles aren't waves and waves aren't particles. Right. So electrons aren't similar to particles in the sense that they're particles too small for us to see; they're analogous to particles in the sense that there is something in common between electrons and particles, but we don't know exactly what. Whatever it is, it's what is responsible for the similarity in the equations. By the same token, there's an analogy and not a direct similarity between electrons and waves; and obviously whatever it is about the electron that makes it analogous to a wave is compatible with whatever makes it analogous to a particle, even though in the macroscopic world, waves and particles are incompatible. Well yes; but who says that just because waves and particles are incompatible, what is in some unknown way similar to a wave can't be in some other unknown way similar to a particle?

In any case, scientists use models because they are analogies, not metaphors; they aren't sneaking in a little artistry on the side. You can really learn something by studying a model; you can study meadows until you're blue in the face, and you won't learn anything perceptive about smiling faces.

And once more, it is the theory of effect and cause, as developed back in Section 2 of the first part of this book, that explains why scientists are so enamored of models, in spite of the fact that they just look like poetry.

Finally, what is a "scientific law," and why are theories that have been verified called "laws"?

Let's make a definition of a law first.

A scientific law is a description of some invariant relationship.

The difference, then, between a theory and a law is that a law just states a (constant) fact, and a theory is an explanation of an effect. For instance, the law of falling bodies is that in fact, no matter what their weight, they all fall to the earth at the rate of 32 feet per second per second. The theory of gravitation explains this by the force of gravity that is proportional to the product of the masses of the bodies and the earth and inversely proportional to the square of the distance between their centers. To take another example, Charles's and Gay-Lussac's laws of gases say that a gas (to oversimplify) increases in volume or pressure by 1/273rd of its zero-Celsius value for each degree Celsius over zero. The kinetic-molecular theory of gases says that heat is the speed of molecular motion, and -273 degrees Celsius is the point at which the speed is zero (motion stops). It therefore explains the expansion in that molecules moving faster hit each other (and also the container's walls) harder, and so increase the pressure or the expansion (if it's something like a balloon). It also explains, of course, why the expansion is 1/273rd for every degree.
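The arithmetic of the law as stated fits in a couple of lines (a sketch of the idealized law, ignoring the ways real gases deviate from it):

    # Volume grows by 1/273rd of its zero-Celsius value per degree,
    # and goes to zero at -273 Celsius ("absolute zero").
    def volume(v_at_zero, t_celsius):
        return v_at_zero * (1 + t_celsius / 273)

    print(volume(1.0, 27.3))   # 1.1: ten per cent bigger at 27.3 degrees
    print(volume(1.0, -273))   # 0.0: molecular motion stops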

So the law simply states a fact, while the theory states as a fact something that is the cause of some other fact. Then why do people say that well-verified theories (i.e. theories that fit the three criteria above and have predictions that are true) "become" laws? It is simply that these unobserved explanations are then taken to be facts.

As I said earlier, any theory is to be accepted as a fact if there is no evidence to the contrary; because even though it might be false (and things like what happened to Newton's Theory of Universal Gravitation are always possible--after all, the discovery of the failure of his theory happened centuries after his death), you have no reason for saying it is false, and you have reason for saying that it is true. Hence, if you refer back to Chapter 5 of Section 1 of the first part 1.1.5, you have physical certainty of its truth.

So just because a theory can't be proved true (in the sense that its falseness would be a contradiction), this is no reason for rejecting it when it doesn't fit your lifestyle or is inconvenient on the grounds that "well, it's just a theory." You reject it under pain of condemning yourself to irrationality; and what possible reason could you have for choosing irrationality over rationality?

I have said this often in the course of this book, but it needs saying (at least during the time I am alive) again and again. On the day I originally wrote this, I had at noontime read a review of two feminist books including articles by philosophers who realized that their positions contradicted themselves, but who said, "But we have to hold both horns of the dilemma, and simply use whichever is more convenient to serve women's interests"--after showing that there couldn't even be "the interests of women as a group." By the time you are reading this, I hope this deconstructive aberration will have sunk into the cesspool of repudiated thought where it belongs.


Notes

1. Judging by the lives of people like Richard Wagner, you probably don't even have to be a saint to have them fulfilled, which gives me a good deal of encouragement, even though one of my ambitions happens to be to be a saint. Somehow or other, by the time the end comes, I'm going to stop fighting my Master.



Section 5

Beauty and Art



Chapter 1

Esthetic(1) understanding

I have already said several times what the basis of this section is: that the appreciation of beauty is like ordinary perceptive understanding in that it knows relationships in the world "out there" based on the effects of the objects on our senses; but that it is different from other types of understanding in that the receiving instrument in question is the instinct (with its emotions) rather than some one of the normal receptor organs. Thus, like Kant, I think the esthetic experience should be able to be included in any philosophical view of human consciousness; but unlike Kant, I will be able to say that it is real understanding (not just a sensation), and that it actually gets us at facts about the world--even though they are facts that cannot be translated into perceptive facts and cannot be known by any other means than through the esthetic experience. I mentioned this back in Chapter 7 of Section 5 of the first part 1.5.7, and said that we would discuss it "much later." I think you can agree that it is much later.

Let me say before I start that the object of esthetic understanding is not quite "the beautiful," any more than the object of perceptive understanding is "the good." Beauty, in fact, is to esthetic understanding what goodness is to perceptive understanding; and just as goodness is in the eye of the beholder, so is beauty. But that does not mean that there aren't objective esthetic facts.

But with that teaser, let me launch into the subject with a couple of definitions:

Perceptive understanding is understanding that uses perceptions and/or images as the termini of the relationships it understands.

Esthetic understanding is understanding that uses emotions and/or the emotional overtones of perceptions or images as the termini of the relationships it understands.

A perceptive fact is a fact understood by perceptive understanding.

An esthetic fact is a fact understood by esthetic understanding.




Notes

1. I am going to spell "esthetic" with just an "e" instead of the "ae" diphthong. It seems to me less pretentious.



Chapter 2

Emotions and objectivity

The major problem in esthetics is the question of whether esthetic knowledge can be objective, and whether or in what sense it gets us at actual facts about the world. Ever since Kant, it has more or less been taken for granted that of course it doesn't, and when artists claim that they are "making a statement" then this itself is taken to be poetry, and if there is any truth in it, the truth of the statement is some kind of "inner" truth about ourselves as perceivers, and certainly isn't on a par with scientific truth. Art is fuzzy, mushy, glorious, outrageous, what have you; but it can't be factual.

Having perpetrated some art of various types,(1) I know that what I was trying to do, at least, was to show people something; and whether they "liked" it or not was as irrelevant to me as whether you "like" this book. The question is whether after reading it you know something you didn't know before; and I, at least, think that that same question is behind what artists do. And if artists think that this is what they are trying to do--and they certainly talk that way--then the theory that explains why they're not wasting their time or lying to themselves and the rest of us has priority over theories that make it all nothing but subjectivity.

So much for self-justification. I think some of it was necessary at the outset, because what I am going to say is the exact opposite of what so many esthetic theories have to say, from the days of Plato on down to the present.

Let me then first discuss the esthetic understanding and see the difficulties connected with objectivity due to the fact that it uses emotions as its "receiving instrument," and how these difficulties can be overcome. Then I will talk about beauty as what esthetically corresponds to goodness, and will give the characteristics of beauty based on its being the object of an esthetic evaluation; and finally I will talk about art as being an esthetic statement, and say a little about the artistic process.

If you remember what was said about emotions in discussing the sense faculty in Chapter 5 of Section 2 of the third part 3.2.5, they are the conscious aspects of the built-in "program" of the brain activating some drive or other. Instinct monitors the state the body is in and links this with the information coming in from the sense organs about the environment; and it is constructed in such a way that comparisons are made, and energy flows into the motor nerves indicated by the drive, while this flow shows up in consciousness as a particular emotion of a certain type and intensity and so on.

Thus, when your blood sugar drops below a certain level, you become hungry and start looking for food (feeling a rather complex emotion, depending on the circumstances, of desire, anxiety, eagerness, and so on); and then when you get it, you feel not only the taste of the food but the emotional satisfaction connected with the accomplishment of the goal of the drive, as your blood sugar rises to the level where the drive turns off.

Notice that when you are hungry, seeing a cooking steak evokes quite a different emotion from the same sight when you have just eaten, which shows that the emotion is not only reacting to the steak, but to the state of the food needs of your body; hence, there is a subjective element in the emotional reaction that is not present in perception (the steak looks the same whether you're hungry or not).

There is also the fact that the drive of which the emotion is the conscious aspect tends toward behavior. In animals, as I mentioned, instinct is the controlling factor, and the behavior is inevitable based on the strongest drive (or some combination of them), with the emotion just a conscious epiphenomenon of it--a kind of superfluous property of the drive itself as it does its work. But precisely because human beings have spiritual acts in addition to sensations, instinct is, to some extent, controllable and can be arrested before the actual behavior, so that we can evaluate the fact that we have the emotion as information on which to base a choice for our action.

It is this arrested state that makes esthetic understanding possible. The first requirement for esthetic understanding, in fact, is precisely this removal of the emotion from its tendency to cause behavior and a keeping it before consciousness as a source of information.

Arthur Schopenhauer made this stage the key point in his The World as Will and Idea. His notion of "will" was derived from Kant, who saw the will as "behind" the phenomena justifying ethics by creating the categorical imperative. Hence, for Kant, the will and freedom were "noumenal," unknowable by understanding (which only correlated phenomena--sensations--into objects--what I would call "perceptions"), but needing to be "postulated" because we can't escape the moral command. If the will was the "noumenon," Schopenhauer argued, then it was the unknown "thing in itself" out there that we couldn't get at by understanding; and since it was "will," it was basically the drive (a kind of cosmic instinct) that (a) created the phenomena out of which we made our objects of experience, and (b) destroyed them, ultimately succeeding when we died and all phenomena ceased. The phenomena and consciousness were created simply to be destroyed; the will was at bottom (from our point of view at least) malicious, cheating us into thinking of this wonderful world only to snatch it all from us. Not what you would call an optimistic philosophy.

There are a couple of ways, according to Schopenhauer, of getting back at this noumenal will, at least temporarily. One is by committing suicide--giving the will what it wants, but before it has had a chance to prolong our torture. The nastiest, however, is esthetic contemplation, because it arrests the will and at the same time doesn't create misery in us, but an intense pleasure that is totally divorced from desire and longing.

Obviously, I don't think much of Schopenhauer's development of one of Kant's errors into a complete Weltanschauung. He has, however, had an enormous influence on the thought of artists of various types after him, such as Richard Wagner. I don't think this is too surprising. Artists, who base their understanding on emotions, tend to handle this sort of thing much better than they handle perception-based understanding; and so they have an aversion for science and cold logic, and tend to think of them as meretricious in comparison with what they understand (just as scientists, for their part, think that artists are playing games, not seriously thinking). But let's face it, we live in a world much of which can only be dealt with in terms of perception-based understanding; and so the artist's experience is apt to be (a) that the things he knows profoundly to be true are sneered at by the philistines, and (b) that he needs to deal with the philistines in order to eat. This is a pretty depressing prospect; and it is made more so by the fact that artists have to cultivate their emotions and "let themselves go" to a much greater extent than other people.(2) To the extent that an artistic person found life a struggle against seemingly impossible odds (a very common experience indeed), to that extent Schopenhauer's philosophy of esthetic contemplation's being the only release would ring a responsive bell. But this doesn't mean that it's true.

To get back to where I was, when you stop the emotion before letting it lead to action, you have two possible ways of using it: (1) you can use it as information on which to base a choice; this is to consider the emotion in relation to the action it leads to, and is the ethical use of emotions as information. Or (2) you can look on it in relation to what it is "out there" that caused it, in which case you are using the emotion esthetically.

The ethical use of emotions destroys the esthetic function they have, because the emotion then becomes personal and evaluative, and one's interest is in oneself or at any rate in changing the world toward some goal, not simply in learning a fact about the world.

This is, I think, why William Wordsworth called poetry "emotion recollected in tranquility." He realized that if you were under the grip of the emotion, you wrote bad poetry. It is the extremely rare "hortatory" work of art that succeeds as a work of art; because it is rhetoric, not art. Its function is to make you feel an emotion, but to direct that feeling toward some action, not to let you use the feeling to understand some fact by its relation either to some other feeling or to some other object that evokes the same feeling. Understanding facts is not deliberation about actions; and art is about understanding facts.

Hence, in order for the artist to produce art and not rhetoric, there has to be what estheticians now call "esthetic distance." Neither you nor your audience can become so involved that the emotion leads to action, or you or they miss the point (the fact) that is what art is all about.

Here, I might say, is where pornography fails as art. There is nothing in itself wrong with depicting, even vividly, sexual activities (even kinky ones)--as long as the emotions evoked are calculated to be arrestable by the viewer or hearer so that he can see the fact that you are driving at. If the work is so erotic that the viewer is likely to become aroused, then he's not going to be interested in some abstract statement you are making (about the human body, say, or about human relations); he will be thinking about what he could be doing in this context, and won't be able to get your point. Or if the depiction is such that it evokes disgust in most people, then--unless disgust is intended as the emotion that reveals the fact they are calmly to understand--you have failed to get across your fact. It would be analogous to going up to someone and shouting in his ear, "Cause is the true explanation of what would otherwise be contradictory!" The person is so annoyed by the loudness of your voice that he's not going to understand what you are saying. And I must say that shock and disgust are very difficult emotions to evoke and expect people to stop at and simply contemplate the interrelations based on them; they are two emotions which almost inevitably are looked on in relation to action.

I am saying this because as I wrote the original version of this, our city of Cincinnati had recently been visited with a "controversial" exhibition of photographs by one Robert Mapplethorpe, who certainly knew how to use a camera. But there were some of them--showing, for instance, a man's arm up another's rectum, a man's finger in another's penis, a man urinating into another's mouth, and so on--in this exhibit entitled "the perfect moment," which definitely caused great feeling among the public--but not a feeling that had anything to do with any "perfect moment." I submit that they were pornographic, not because they depicted sex, not even because they depicted homosexual sex, and not even because they depicted far-out, kinky homosexual sex,(3) but because you either had to distance yourself so far from any emotion and just pay attention to the technical details like composition, lighting, depiction of skin texture, and so on, that you missed the point (art isn't technique), or you had to be one of that very very small group who could contemplate getting someone's fist shoved up his rectum as a positive experience. And even those people are apt not to look on such a picture as a statement of a fact, but as either anticipating a repeat of the experience or just wallowing in the re-evoked memory of it.

In any case, if Mapplethorpe was trying to teach anyone anything about a "fact of life," he singularly failed, at least in Cincinnati.

Incidentally, the curator of the museum was overjoyed that so many came to see the exhibit--far, far more than had ever come to anything else the museum had shown. But that wasn't because it was art, I submit, because the people were all crowding into the X-rated room, and not paying attention to the other pictures. In the days of hanging, drawing, and quartering (which I will not describe), crowds flocked around the gallows, and raised huge cheers as the hangman-butcher did his grisly thing. Plus ça change, plus c'est la même chose. If audiences are what you want, this is the kind of thing to draw them.

Now it is true that for a very few select individuals, Mapplethorpe's "controversial" photographs are perhaps neither disgusting nor pornographic--for those who can have the kind of emotions Mapplethorpe himself evidently had, and can have them at a low enough level so that they could see what he was driving at (if anything, of course; it is possible that he made the photographs not because he understood anything new, but either to shock or to use as technically good pornography). But that number is so small, as we will see, that it doesn't make sense to call what is being done "art" and exhibit it to the public, any more than it makes sense to give public readings of the General Theory of Relativity. We reserve the esoterica of science, however valid they might be, for those who can understand them; but the present age for some reason wants things that are bound to be misunderstood to be thrust before the public.

So already our investigation has told us something that can be useful for distinguishing good from bad art: if the emotions can't be held in arrest, but are strong enough to tend to make us contemplate the action they refer to rather than the source that produced them, we won't have an esthetic experience but one in the general area of ethics.

The next thing to notice is that emotions not only react to the environment as reported by the sense organs, but simultaneously to the state of the body (its needs) at the moment. If the esthetic experience is going to tell us something about the outside world, this added subjectivity has to be circumvented.

Can it be? You will recall from Chapter 4 of Section 5 of the first part 1.5.4 how the subjectivity of sensation itself is circumvented. Given constancy on the part of the receiver, then relationships among the perceptions as effects of the energies "out there" will be the same relationships as obtain among the energies themselves that caused these effects. There is no need at all to assume that the sensations are "like" the energies in any way.

But that supposes constancy on the part of the receiver. You can't get identical patterns on your computer screen by pushing the same keys on your keyboard if the computer has a program in it that, depending on the distribution of data on the disk, changes the keyboard layout (like those "key redefiner" programs). One minute you will push the "p" and a "p"-shape will appear on the screen; the next minute you will push it and a "K" will appear. How can you know what you are typing in?
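The point about constancy can be made with a toy program. The mapping from key to mark can be as arbitrary as you like; so long as it is fixed, the relationships among the inputs (same key, same mark; different keys, different marks) survive in the output, and reshuffling the mapping at every keystroke destroys exactly those relationships:

    import random

    # A constant receiver: an arbitrary but FIXED substitution.
    fixed = {ch: chr(65 + i) for i, ch in enumerate("abcdefghij")}

    def constant_receiver(text):
        return "".join(fixed[ch] for ch in text)

    def inconstant_receiver(text):
        # the layout is reshuffled at every keystroke
        return "".join(random.choice("ABCDEFGHIJ") for ch in text)

    print(constant_receiver("abba"))     # "ABBA": the pattern survives
    print(inconstant_receiver("abba"))   # e.g. "GCAH": the pattern is gone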

Obviously, if it can be done at all, it is more difficult. But it is something like what we discussed in the preceding chapter on probability, when we allowed the sides of our die to vary, but only within a certain range; there was still a way you could get an answer.

And so there is here. First of all, if you get two different emotional reactions to two different objects at the same time, then obviously this has to be due to a difference in the way they affect your emotions. The reason for this is that at any single moment, your body is in just one state, and so at this moment, the subjective component that instinct is monitoring is the same.

So when you are talking to Frank and John, and you feel pleasant feelings toward Frank and loathing toward John, then something about them is causing the different reactions. Hence, there is a real esthetic difference between them. That is an objective fact; they are not in fact acting as a whole on you in such a way that your emotional apparatus receives what they are doing in the same way.

Now this is not necessarily to say that Frank is lovable and John is hateful, in the sense that Frank has some permanent property of "lovability," and John has the opposite property. It might be, for instance, that at the moment you have a splitting headache that neither of them knows about, and Frank happens to be talking quietly, and John is shouting and giving you playful punches in the shoulder, which you generally respond favorably to. All you know from this is that in the state you are in, the two of them are esthetically different.

But secondly, you can talk about esthetic properties of a given object that are (relatively speaking) permanent if you discover that, no matter what state you happen to be in, you react in more or less the same way to Frank. Obviously in this case, the subjective component of the "input" into the emotion is what is now varying, and yet the emotion remains the same one; and this constancy in the emotion now must be due to the objective component. Thus, you can say that Frank is what the Spanish call simpático, which means something like "genial" or "pleasant to be with."
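
The logic of these two moves can be put schematically. What follows is a minimal sketch in Python, with invented data, treating a reaction as a joint effect of the object and the body's state, and then holding one of the two fixed:

    # Invented sample data: an emotional reaction as the joint effect
    # of the object and the body's state at the moment.
    REACTIONS = {
        ("Frank", "rested"):   "pleasant",
        ("Frank", "tired"):    "pleasant",
        ("Frank", "headache"): "pleasant",
        ("John",  "headache"): "loathing",
    }

    # (1) Two objects at the same moment, hence in the same bodily
    # state: a difference in reaction can only come from the objects.
    if REACTIONS[("Frank", "headache")] != REACTIONS[("John", "headache")]:
        print("Frank and John are esthetically different--at the moment")

    # (2) One object across varying states: a constant reaction must
    # be due to the objective component.
    if len({REACTIONS[("Frank", s)] for s in ("rested", "tired", "headache")}) == 1:
        print("Frank has a (relatively) permanent esthetic property--for me")

The first comparison cancels out the subjective component by holding the state fixed; the second cancels out the state by letting it vary while the reaction stays put.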

The point is that this aspect is objectively in Frank, even though you got it through your subjective emotions. But again we must be a little careful here. All you know is that Frank has something about his personality (his way of relating to you) that is something that makes you as a person react favorably, and enjoy being with him. You've got beyond your emotional state at the moment, but you haven't got beyond yourself as an emotional receiving-set.

And it is clear that as emotional radios, we seem to be tuned in to different stations. You tell someone how pleasant Frank is, and he says, "Oh, really? That snob? Are we talking about the same person?" You think he's simpático, and he thinks he's disgusting.

Before analyzing this, note how common esthetic understanding is. We tend to equate it with going to Florence and standing in awe before Michelangelo's David; but it's all around us. Every time a person uses an emotional word like simpático or "disgusting" or "pleasant" or "terrible" or "boring," he is using an esthetic, not a perceptive, concept. So, somewhat like Molière's bourgeois gentilhomme, you have up to now been speaking poetry half your life without realizing it.

But the difference we have confronted is the one that makes most people think that esthetics is subjective. Even if you can say that objectively, for you, a person has an esthetic aspect of "pleasantness" about him--because he always makes you react this way, irrespective of the vagaries of your emotions--still, your general emotional condition is not necessarily like anyone else's, and so what is objective is objective just for each person personally. That is, the same set of traits that you find generally pleasant another person might very well find generally unpleasant.

How do we get around this?

There is no absolute way around it. But let us suppose that the person you are talking to is just about the only person you know who can't stand Frank. Everybody else seems to speak highly of him, enjoy having him around, and so on. The fact that your present companion can't stand Frank then says more about your companion than about Frank.

And what does it say? The basic program built into our bodies for adapting them to various environments is the same, because it is genetic. All cultures cry at a loss, smile when pleased, laugh when finding something unthreateningly incongruous, break into a cold sweat when frightened, and so on; so these things, unlike forms of address and whether you switch your fork and knife after cutting your meat, are not due to the culture, though they are, to some extent, modified by it.

So there is a fundamental constancy of emotional reactions, just as there is a fundamental constancy of the shape of the human face, in spite of the millions of variations on this basic pattern. Hence, if just about everyone reacts in the same way to Frank, then you can say that Frank has an esthetic property that is calculated to evoke a pleasant reaction in the human being as such. And if your companion can't stand him, this is because your companion's emotional reaction to him is abnormal, not because Frank has the esthetic property "for you" and doesn't have it "for him."

That is, in that case you're talking about two different senses of "having an esthetic property." Insofar as an object's esthetic properties differ from person to person but remain constant for each person, the property as a property is different from whatever it is about the object that evokes the same reaction in "the normal human being"--which latter is discoverable in practice by finding out what the reaction is in most people.
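
How that is "discoverable in practice" can be sketched in the same spirit--the data here are invented, and a simple majority is standing in for "the normal human being":

    from collections import Counter

    # An invented poll of reactions to Frank; your companion is the
    # lone dissenter.
    reactions = ["pleasant"] * 17 + ["can't stand him"]

    # "The normal human being's" reaction is read off as the majority one.
    normal_reaction, _ = Counter(reactions).most_common(1)[0]
    print(normal_reaction)  # pleasant: a property of Frank for humans as such

    # A reaction that departs from the consensus says something about
    # the reactor's tuning, not about Frank.
    print("can't stand him" != normal_reaction)  # True: the dissent is abnormal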

Emotions are quite flexible, of course, and are modified by our experience as well as the personal differences in our genes. Hence, no individual person will fit the definition of "the normal human being" for all esthetic aspects of things--and, of course, at some times won't be what he himself normally is. But by finding out where your reactions differ from practically everyone else's, you can then, like the colorblind person, write this off as a special characteristic of yourself, and so can then look at your reactions as objective, but personal, and not consider that you have found something out about reality that you can share with anyone. I, for instance, don't much like Mozart's music; to me, it is pleasant but repetitious and predictable. But I know that Mozart was a great innovator in music, and that the things I find annoying are just conventions of his day which don't have much to do with what he was really saying. I understand all this, and can see what others see in Mozart's music; but I can't appreciate it esthetically the way practically every person of any sophistication in music is able to. Well, that's one of my esthetic shortcomings.

But the point is that, based on my personal reaction, I don't then say "Mozart was pretty mediocre as a composer." I have no grounds for making a statement like that, because I would be talking to other people, and assuming that they ought to give assent to it as something objective; but it is objective only for myself, not for everyone. It would be like the colorblind person saying "The stop and go light are the same color." This is true for him, if it means "They have what makes my eyes react in this way." But if he means it (as everyone, of course, does) "They have what makes eyes (and instruments) react in the same way," then his statement is no longer objective and true.

Where are we, then? There are objective esthetic facts, which we can discover. Some of these are very abstract, such as the fact of esthetic difference between John and Frank based on my momentary state (they are esthetically different at the moment), but this doesn't allow me to attribute any quasi-permanent esthetic property to them; it is like saying that a cloud is shaped like a horse's head, which might be true, but only for the moment you are looking at it, and only from the angle you are looking at it from; it doesn't say anything about the cloud as a cloud.

Secondly, there are objective and personal esthetic properties, based on the peculiarities each of us has as an emotional receiving instrument. These properties are "out there" in the objects (how else could we be affected by them?), but are not aspects of the objects that you can "share" with others in general, because others won't understand what you're referring to: they don't get the same reaction.

To make myself perhaps a little clearer here, if you think of an AM/FM radio, you generally find it with a single tuning dial, and a switch to change from one type of modulation to the other. If you have it tuned to AM, you get a totally different reception from what happens if you leave the dial in the same place and switch to FM. You are picking up a different signal.

This is what happens with these personal but objective properties. You and the normal person are actually responding to "different signals" from the same object, because you have your emotional apparatus tuned differently. So there are things about the object "out there" that are making you react this way; but the complex of acts of this finite object--because of differences in emphasis, or even because some acts are ignored (just as we generally don't see what is off to our side)--starts a different subroutine working in you from the one that starts in the normal person.

The point I am trying to make here is that to make objective statements based on the peculiarities of your personal emotional receiving-set is a waste of time, because others will not understand you. Statements are public, and are spoken, not for self-expression, but to be understood by others; but if you have reason to believe that your emotional reactions are different from other people's, then they won't be able to understand the facts you are trying to tell them (the relations among emotional overtones of objects) for the simple reason that they won't get the emotional reaction you do to the objects, and so the relation you see won't be there for them.

This is what is esthetically wrong with Mr. Mapplethorpe's "controversial" photographs. Let us assume that he understood something profound from them, based, perhaps, on a very intense emotional reaction that the photographs evoked. The trouble is that the normal human being (at least based on the title of the exhibit) gets an entirely different (though evidently fully as intense, to judge from the furor) emotion upon seeing them, and cannot understand what Mapplethorpe was trying to say with them.

To say, then, that Mapplethorpe was an artist (certainly in his other photographs, even in many of his less extreme erotic ones, he was), and that therefore "we should educate ourselves" to have the proper emotional reaction and so understand what he is trying to tell us, is to say that it is incumbent upon us to train ourselves not to feel disgust but pleasure at these acts, so that we can understand how they fit into the "perfect moment."

But in order to do so, one would have to train oneself to overcome repugnance to an act that not only is objectively morally wrong (sorry, but it is, as I will try to show in a later part), but is damaging to the body, and very conducive to diseases like AIDS. All this in order to understand a fact? As well might Hitler's generals ask us to "train ourselves" to feel pleasure at making lamp shades out of people's skin so that we could understand facts based on this pleasure. We repudiated the scientific knowledge that was gained by the Nazis from torturing people (such as how much cold water a person can endure being exposed to before he dies). The same veto ought to obtain for esthetically known facts; the price to be able to understand them is too high. (Mapplethorpe himself, by the way, died of AIDS.)

Given that moral and prudential stricture against turning oneself into an emotional receiving instrument similar to Mapplethorpe's, it follows that his statements are personal statements in his own personal language, which not only cannot but ought not be understood by the normal person. If you want to invent your own private language and then speak in it, who is to stop you? But when you speak to others, you owe them respect as hearers, and you don't expect them to understand what you alone hold the key to.

This, as I will point out later, makes this kind of thing not art, because art is a statement of fact to other people, and this sort of thing simply does not state a fact to other people. In most cases, where sex--our present age's holy object--is not involved, these "statements" based on personal emotional aberrations are received as just funny. A bulldozer operator of poetic bent, who writes a tender ode to his machine, may be expressing facts about it that mean a lot to him; but others reading "Your tender lips curling round the gas pump's teat" are going to pain him with their "insensitive" guffaws. Let him write his poetry and read it to himself or to the Society of Dedicated Dozer Drivers; but it can't be called art, because it's not objective and public in the only way esthetic knowledge can be.

This is, of course, one of the problems artists have, and we will come back to it later when we discuss art. The artist never knows whether what "works" for him does so because of what his emotional apparatus has in common with mankind in general, or whether it is due to some peculiarity of himself or his culture. Ultimately, it is time that tells. He knows a fact. Can anyone else understand it? That he doesn't know.

One of the reasons for this is that there is not, as there is in perceptive knowledge, the further stage of objectivity I mentioned in Chapter 5 of Section 5 of the first part 1.5.5, where you can "consult" a scientific instrument or something totally different from a human being, to find out if that object also reacts in the same way to acts that give humans the same reaction. For instance, I mentioned that we see light and feel heat, but that instruments built to react to the electromagnetic spectrum record them as being the same kind of activity.

But there is no such thing as an instrument that reacts to the same things as our emotions react to, because there is no instrument that includes the state of the body in its input, and without that the emotions are not emotions. Hence, since emotions necessarily involve the human body as part of their input, with both the external data and the state of the body combining to produce the emotional reaction, this stage of consulting an instrument is simply out of the question.

But what that means is that the esthetic experience on its most objective level (the one where our emotional reaction is in tune with "the normal human being's" emotional reaction) tells us as much about human nature as it does about the fact "out there." The fact is what it is because the objects in question are such that they affect human beings in the way indicated by the relation. The meadow is smiling on a sunny day, and everybody understands what you mean by this, because the meadow is objectively such that it makes people feel more or less the same way as they feel when smiled upon. And people are objectively such that they react in this way both to sunny fields and smiling people. One who does not is abnormal.

I hasten to say that this is not to be taken as meaning that the person who is not affected in this way has something wrong with him. I mean "abnormal" in the sense that left-handed people are abnormal; they are just not like "most other people." There is no reason why human beings should be like "everyone else"; in fact, one of the functions of freedom is to free us from the fetters that the lower animals are chained with, in acting according to type. Eccentricity is perfectly permissible; but one who is eccentric is different, that's all.

What I was saying above about art is that it taps the "common core" of our ability to react emotionally; and this is why it is a "universal language," and is both international and transhistorical. Indian ragas, for instance, can be appreciated by other than Indians; Japanese paintings, though made on entirely different rules from Western ones, are immediately recognized as breathtakingly beautiful by Westerners; and even the prehistoric cave paintings discovered in the last century show that human beings have been esthetically the same as long as there have been human beings.

Now this is not to say that cultural modifications in emotional reactions to things can't produce "culture-specific" art. I mentioned that there is probably a subculture in which Mapplethorpe's photographs can be contemplated as we contemplate Goya's majas. I was trying to say earlier that it is a very special subculture, and my point was that it was not worth it to belong to that subculture to understand what Mapplethorpe was driving at.

Similarly, my own generation enjoyed The Green Pastures and The Taming of the Shrew, but now, I hope, would have misgivings about seeing them, because we see more clearly the condescension toward the Blacks in the first, and in the second we react adversely to a man's training his wife as if she were an unruly dog. In that play, Shakespeare was not at the truly human level, I think--at least, I hope not. We cannot, and I think should not, be able to understand esthetically what Shakespeare was driving at in Shrew, because it is disgraceful to feel what he expected his audiences to feel, given their callousness toward women.

Now it's quite possible to give a kind of prose summation of the play, and understand that. The theme is that it is possible to train people the way animals are trained, and that the people might turn out to be happier for it. Then, it can be shown that Petruchio was basically benevolent, and that the indignities he foisted on Katharina were with a view to making her more human, and so on. Further, you can give the plot.

But that kind of "translation" of a work of art into perceptive statements doesn't translate the esthetic statement itself. The understanding that Shakespeare was trying to share with his audience comes through the relationships based on the emotional overtones of what the characters say and do and what they look like and so on; and this is simply not sayable in perceptive terms, because these relationships are not the same relationships as the ones between what affects our external sense organs.

Hence, if you "restate" a poem in prose, to try to divorce it from the "mushy stuff" and just "say what it says," then you have said what it precisely doesn't say. It could only be restated as another poem; but that wouldn't be a restatement, probably, because the emotional overtones in the restated version would be quite different, and so the fact to be understood would only approximate what the original says.

This is one of the reasons why it is vital to read works of poetry, drama, or fiction in the original language. Translations are either prose restatements of the words and are like hearing a recitation of the lyrics of a song and pretending that you've listened to the song, or they are attempts to re-create the emotional climate of the original, and are therefore new works of art in the new language, works whose esthetic statement is fairly close to that of the original. I remember reading Don Quixote in translation years before I learned Spanish, and wondering what people saw in this ridiculing of a person of good intentions. Fairly recently, I read it in Spanish, and was introduced to a completely different book. I had more or less the same experience I had on reading what Swift actually wrote in Gulliver's Travels after having read the children's version of it when I was young.

There is nothing wrong with translating works of art; and I am grateful to many translators. However much the translations of Dostoyevsky and Sigrid Undset and Dante differ from the originals, I would have been able to get no glimpse of them at all without the work of the translators. You can approach somebody's statement if you understand what he is trying to do, just as conductors can approach what composers are trying to say; but the approach is asymptotic: you can never hope to duplicate the original, because it means what it means, and the sound of its words, the cultural background giving emotional overtones to those words, the rhythm of the sentences, and so on, can't be preserved in the other language; and so the translation has to mean something different esthetically.

People familiar with Kant can see in what I have been saying something that relates to his "judgment of taste," where he says that a person expressing an esthetic judgment states something subjective but universal, in that he expects others to react the same way he does.

What is behind Kant's notion of esthetics is that for him everything is what I would call "subjective," since it is the human mind's organizing of the data of sensation that is responsible for all knowledge. I mentioned his view in the section on subjectivity in Chapter 1 of Section 5 of the first part 1.5.1. For him, "objective" knowledge comes when the "understanding" collects the raw data into a coherent, lawful, whole by applying various rules of organization that he called "categories."

His esthetic theory takes its point of departure from one of the conditions for the understanding's being able to do this. Before one actually understands (applies a category), the imagination, according to him, "collects the data" into a kind of bundle (not yet coherent) under various "schemata" or patterns. The difference in Kant's philosophy between what the imagination does and what understanding does can be seen, I think, in the illogic of dreams (where anything goes) as against the demand that what is objective be reasonable. So what imagination does is not bound by reason, and hence it is subjective, not objective.

So there is this intermediate stage of gathering up the sense data into a kind of set before applying the category that makes a unit (an object) out of them. At this stage, what is going on is subjective and the work of imagination; but since our imaginations, like our intellects, are the same, then just as the laws of nature (what is produced by understanding) are the same for you and me, so your applications of the schemata of the imagination will be the same as mine.

Hence, when I appreciate something beautiful, I utter a "judgment of taste," meaning that I have applied some schema of imagination to it, but have not yet understood it, and so can make no objective statement about it. But since I know that we are the same, I make a subjective but universal judgment about it, and expect others to appreciate it just as I did.

It's a brilliant analysis--if all of our knowledge knows only itself and never knows anything about what is "out there." But in Chapters 3 and 4 of Section 5 of the first part 1.5.3 1.5.4 I tried to show (a) how Kant's difficulty with knowing what is "out there" is solved by understanding relationships among sensations, and (b) how on his own showing, it would not be possible to explain on the basis of the mind plus the raw data of sensation why one object differs from another (i.e. why this set of data must be taken as an object and the data surrounding it cannot be included in it).

My explanation of the subjectivity--or rather, the relativity--of esthetic judgments is outlined above. They base themselves not on imagination but on the emotions, first of all; and the emotions have a subjective component in the data themselves, though this can be circumvented in the ways I said. We make esthetic statements when we think that our emotional reaction is one of the ones common to human beings as such; otherwise, we keep silent.

Hence taste in the sense of "There is no accounting for tastes," or perhaps better, "chacun à son goût," recognizes the fact that people's personal lives and their culture modify their emotional apparatus as receiving instruments, and hence everyone's emotional reaction to things is bound to be to some extent personal. And in this sense, there is no correct or incorrect taste. If I like ice cream and you don't, if you like Mozart and I don't, then tastes differ, that's all.

But there is a sense in which you can talk about "good taste" and "bad taste." To the extent that one's personal taste differs from that of the "normal human being," or even to the extent that the culture's taste differs from that of the "normal human being," to that extent his esthetic judgments cannot be understood by others, and the facts he understands are inaccessible to others. Since we are talking about facts here, not just the emotional reactions themselves, then these personal facts just don't exist for other people, while there are public facts that can be understood by people who have their emotions in normal working order.

That is, it is possible to educate your taste so that you will be able to react emotionally in this "normal" way and then have the riches of some of the world's most profound statements available to you. You do it, actually, by viewing or listening to what are recognized as great works of art, with the idea that there is something worth while here, and if you don't see it, then that's because you're not looking (emotionally) in the right place, not that there's nothing to see. As time goes on, your boredom with Dickens or Mahler gives way to an appreciation of the deep insights they had into what we and the world are.

What you have done is trained your emotions so that they more closely approximate the emotional reactions of "the normal person" (the way you trained them to feel pleasure at eating olives, say), and trained your eyes and ears to notice and react emotionally to many more details than at first you could detect. Once you become skilled in this, you wonder how you could have appreciated the "2 + 2 = 4" of pop music or been able to stand the Harlequin romances that fill supermarket bookshelves. You now have taste in the objective sense, and can actually learn something about the world as well as what it is to be human.

So Kant was wrong. Esthetic judgments are not "subjective but universal." They are objective, and at the superficial level personal, but at the level of our common humanity, valid for all human beings.


Notes

1. In case you are interested in my "credentials" as an artist, and might be skeptical given my claim in the preceding section to know something of science: I have actually made money selling odd-looking paintings and giving a one-man dramatic performance; I have sung in a high-class amateur chorus with the Cincinnati Symphony Orchestra under, among others, Leonard Bernstein; and I have written a couple of novels and plays, composed a Mass which was actually sung in a church I wasn't in, sculpted a couple of things, and written some poetry. Whatever the esthetic worth of any of this, at least it can be said that I've had "hands-on" experience with art, and am not up in my ivory tower looking at it from the outside.

2. Is this one of the reasons why so many artists are homosexual? No, I am not just uttering a stereotype; I am speaking from the point of view of one who has been among them. Homosexuals seem over-represented in the arts. Possibly this could be due to the fact that emotions that lead to acts that have such great social pressure against them (even tolerant "straights" tend to find the acts themselves disgusting) have to be quite strong not to be suppressed; and so homosexuals would probably be more emotional than ordinary people. And, of course, if that is the case, then it would follow that homosexuals would be more apt than ordinary people to use the emotional overtones of things as the basis of their understanding; which would make them gravitate toward the arts. This is not to say that there can't be emotional heterosexuals, by any means. I am not saying that artists tend toward being homosexual, only that homosexuals, if what I said is true, would tend toward being artistic rather than scientific.

3. A homosexual of my acquaintance was concerned about the exhibit, because he thought it would "give homosexuals a bad name," and reinforce the thought that this kind of thing was what homosexuals routinely did to each other.



Chapter 3

Esthetic facts and beauty

Having established, then, that esthetic judgments are objective, let us look a little more closely at the judgments themselves.

First of all, they of course have as their form an esthetic concept, which is the relationship itself and its foundation in the emotional overtones of the perceptions and/or images.

These concepts are what give us our emotional words, such as "pleasant," "disgusting," "laughable," "genial," "terrifying," "desirable," and so on. You will notice that these apply to objects, not the emotional state itself, just as "green" applies to objects and isn't the sensation which is the reaction to the green objects. Emotional words that apply to the emotion itself would be things like "happy," "depressed," "frightened," "hungry," and so on. When we use such terms, we are simply reporting to others that we have an emotion of a certain type (i.e. that some drive is operating) and are not saying anything about what caused it. So emotional words that describe one's emotional state are not esthetic concepts.

Esthetic concepts are potential relationships among objects based on relationships among emotions as their effects on us.

They are potential relationships, of course, because as concepts they are "detached from" any given individual object, and are merely the relationship itself and its foundation as a foundation of a relationship. "Pleasantness" is a relationship and as such is not actually connecting any objects.

The first thing to note about esthetic concepts is that, like perceptive concepts, they are abstract, in spite of what artists and estheticians are fond of saying about the "concreteness" of art as opposed to the "abstractness" of science.

Any concept is necessarily abstract, because it deals with only one relationship (and its foundation) out of the infinity of ways the objects can be related. It is obvious that an esthetic concept prescinds from the way the objects affect our senses, and is only interested in how they affect our emotions. When you say, "John roared at me like a tiger," you obviously are ignoring whether John's words might have the same pitch or volume as the tiger's roar; you are simply interested in the fact that they produced the same effect on your emotions.

But then why is it that art always involves some concrete object? The answer is first of all that it doesn't, necessarily. I remember the profound esthetic effect that the reading of the Christmas martyrology had on me in the seminary; all it was was a listing of dates (which were inaccurate), something like, "From the creation of the world, seven thousand years; from the flood, four thousand years; from Abraham's birth, two thousand four hundred years;... (and so on)...; while all the earth was at peace, the birth of Jesus, son of Joseph and Mary, called the Messiah and the Son of God." The requisite is that whatever is being said esthetically has to produce emotions that you can use for the basis of relationships, not that it be "concrete." Much of poetry's esthetic effect, for instance, depends on things like the placement of words, their sounds together, the rhythm, and so on as well as the images they evoke.

Well, but isn't that concrete? True, perhaps; but then there are people who get esthetic effects from mathematical theorems (they actually do), and who talk of the beauty of Euclid's proofs. It was Edna St. Vincent Millay, after all, who wrote that "Euclid alone has looked on Beauty bare." So concreteness is not necessary; emotional overtones are.

But if the esthetic concept is to be more than elementary, then it's not going to be something we have a single word for. And in that case, the emotions are going to be complex, and you will have to awaken them in the other person in order for him to see the relationship you want him to understand. But since emotions are not evoked by abstract terms, generally speaking, you will have to confront him with some concrete object (either in his vision, his hearing, or his imagination) which will produce the proper emotions; then he will understand your concept.

So that is why art generally speaking has to be concrete; it's just that you can't get emotions without having something concrete to cause them. Few of us are capable of feeling emotions just by willing them. Even actors have to imagine themselves in the proper situation in order to feel the emotions that they then project by empathy to the audience.

But the concreteness of the cause of the emotional reaction should not make us think that the concept itself is concrete; it leaves out enormous amounts of information about the objects it deals with.

Let us look a little more closely at simplicity and complexity. As I said, simple concepts generally have words in the language for them, because they deal with experiences that people have frequently. Some more complex ones that are common have cliché metaphors attached to them, like "the smiling meadow," "the evening of life," "like a tiger," "meek as a lamb," and so on.

As I have said a couple of times already in discussing analogy in Chapter 7 of Section 2 of the first part 1.2.7 and in the preceding section of this part, metaphors express relationships based on emotional overtones, while analogies express the perceptive type of relationship: relations among causes based on relations among their effects. If someone says, "He came at me like a tiger," no one expects that this means he was on all fours and making inarticulate sounds; it simply means that he produced the specific kind of fright that "the normal human being" feels when imagining himself pursued by a tiger. And the same goes for "meek as a lamb." Wooliness and making "baa" noises have nothing to do with it.

Simple esthetic concepts, then, have common words and phrases in our ordinary vocabulary. But there are much, much more complex ones; and these can't be expressed in just one or a few words. They are what works of art express.

If you consider Michelangelo's David, for instance, you find that it is a whole esthetic treatise. The basic idea is that of Florence as David facing the Goliath of the rest of the world; but there is the worried frown on David's brow that gives an entirely different emotional tone to the statue from what one receives from the story in the Bible; there is the rather vacant look about the eyes, staring off at the horizon, that makes you feel that David is looking at more than just a challenge, and is seeing into the heart of the universe; there is the whiteness of the marble and its coldness that makes it very different in its emotional effect from a full-size replica I once saw in Cincinnati that was made of painted fiberglass. There is its size and the obvious strength of the man that has its emotional overtone, as well as the perfection of his body, without an ounce of fat. There is, for those who know the history of art, the fact that this is obviously an Italian peasant and not a Greek god, and yet just as obviously has the Greek gods as its pattern. For those who know the history of the statue itself, there is the emotional overtone connected with the fact that this came out of a piece of flawed marble that Michelangelo saw in a yard and thought he could use by carving around the flaws. For those who believe that Michelangelo was homosexual, there are the erotic overtones connected with the fact that he carved such a statue. And for me, who went to Florence without being able to see the original in the museum (which was closed, as everything in Italy seemed to be, for repairs), and who saw the copy in the square where Michelangelo intended it--but with a face fouled with bird droppings--there was the emotional overtone of the commentary on the people of Florence who would let this happen to the symbol of their noblest spirit.

Not that I have exhausted the things you can notice and their emotional overtones; I have barely scratched the surface. All of these are interrelated and unified by that one statue in that one city; and they say something as complex as this book with its thousands of pages is saying in the perceptive realm. Somebody who goes through the square and looks at the statue and says, "That's nice. Come on; the book says we have to see the duomo," "understands" what Michelangelo was saying about as much as someone who looks at the cover of this book and flips a page or two and says, "Oh, I see; it's about reality." No one would pretend that he had "read" the book by doing this; but people say of things like the David, "What do you want to go back and look at it again for? We saw that yesterday."

Really complex works of art do not tire; as Kant pointed out, they are "inexhaustible." Every time I hear Beethoven's Fifth Symphony, I have a new and slightly different esthetic experience, because in the state I am in, I am receptive to some emotions that I wasn't the previous time, I notice sounds that I hadn't heard because I concentrate at different times and let my concentration lapse at different times, and so on; so the unified effect of all of this is different, and I understand a slightly different fact about that set of sounds from what I understood the last time--and simultaneously, of course, know something slightly different about myself as an emotional being. Beethoven's genius, like that of all great artists, was to be able to weave into a meaningful unity all sorts of variations on the emotional overtones his music would produce.

Note that there are two different kinds of complexity. The first is internal, based on the emotional impact of the shapes, colors, or the pitches, volumes, timbres, and so on. Each of these components of a painting or sculpture or a piece of music or poetry or drama has its own emotional impact; and they intermesh with each other into a complex relationship. This is why the wrong meter, say, can destroy a poem (because it doesn't fit), and why a different rhythm can make the same melody into a totally different piece of music. Meredith Willson in The Music Man used the same tune for "Seventy-Six Trombones," as a stirring march, and for "Goodnight, My Someone," as the love musings of Marian the Librarian; and the difference was that one was in march time and the other in waltz time.

Music without lyrics is almost completely internally complex in this way. The esthetic concept consists in hearing the sounds and feeling their emotional impact, and then hearing them again after contrasting sounds, so that there is then a different emotional impact of the same sounds; and it is this interplay of sounds and emotions that is basically what is esthetically going on.

The complexity can be extreme. Someone has analyzed Bach's St. Matthew Passion simply by the length of its parts, and found that it is full of nested "golden proportions." For some reason, a work of art (of any type) is esthetically satisfying if it divides in such a way that the shorter part is to the longer as the longer is to the whole (this is not a simple fraction, by the way, if you want to figure it out). For instance, one of the reasons why Happy Birthday is a perennial favorite is that the "dear" of "Happy birthday dear ..." falls at the golden-proportion point. It seems that the St. Matthew Passion has each little part divided according to the golden proportion; and these parts fit together into subsections which are related by the golden proportion; and these into larger sections also related the same way, up through a dozen or so layers, until the whole work's two parts are divided by the golden proportion. Unfortunately, we rarely hear it that way, because conductors, who don't have quite the esthetic sense Bach did, cut out parts of it to get it into performable compass.
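
For those who do want to figure it out, the arithmetic is short. Writing b for the shorter part and a for the longer (so that the whole is a + b), the condition "the shorter is to the longer as the longer is to the whole" gives

\[
\frac{b}{a} = \frac{a}{a+b}
\quad\Longrightarrow\quad
a^2 = ab + b^2
\quad\Longrightarrow\quad
\varphi^2 = \varphi + 1
\qquad\left(\varphi = \frac{a}{b}\right),
\]

\[
\varphi = \frac{1+\sqrt{5}}{2} \approx 1.618,
\qquad
\frac{1}{\varphi} = \varphi - 1 \approx 0.618,
\]

so the proportion is irrational (hence not a simple fraction), and the division point falls at about 61.8% of the way through the whole.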

But of course Bach's St. Matthew Passion is not pure music by any means; it involves the lyrics which are the text of Matthew's Report of the Good News, as well as various poetic commentaries and traditional hymns. And this is external complexity. The passages of the Bible have their emotional overtones, and so do the poems and the hymns. Each of these emotional overtones must fit into the emotional overtones of the music for the music to "work" as a piece of art--otherwise, we will be confused by it, rather than understanding something.

So Bach was not merely saying something about the interrelations of sounds; he was using those interrelations of sounds to say something about the death of Jesus; and because of the meaning of the music as music, we understand something that much more profound about the esthetic meaning of the death of Jesus and its relation to ourselves, both as its cause and as its beneficiary.

Some art has almost nothing but external complexity. The Christmas martyrology I mentioned above connects, just by mentioning the names on Christmas day, the emotional overtones involved in the Creation and the various events and people in the Bible, and gives an esthetic meaning to Jesus as the culmination of history up to himself and the beginning of history from then on. In itself it is just a list; and if you don't have any particular emotional reaction to the names and events mentioned, it is a pretty boring list; its esthetic meaning comes through the emotions those names evoke in the Christian.

The same sort of thing goes for such apparently "simple" songs as Were You There When They Crucified My Lord, which is just a tune, not much more than an arpeggio of a major triad, with none of the tricky rhythms or harmony of many Gospel songs; but the lyrics put you there, looking up at him, and the "Oh!--Sometimes it causes me to tremble!" is so vague in itself that it collects around it all of the conflicting turmoil of what that event produces in the believer who imagines himself there. It can be devastating in its esthetic impact if sung by one who knows what he is doing, and listened to by someone who is receptive to all the nuances.

So concepts can be either simple or complex; but they can also be more or less clear. In the perceptive realm, it is often the case that you understand something, but haven't weeded out just what the "hooks" in the object are that allow you to relate them in the way in question; your concept is not clear. For instance, when I was writing about logic a couple of sections ago, I knew that there was something about contemporary logic that was wrong, because it made supposedly sound conclusions that I saw could be in fact false; and I had some idea that it involved the way connectives were used. But it wasn't until I investigated these and tried out several possibilities, alternately convincing myself that I had shown the flaw in contemporary logic, and then being convinced that I was the one who was wrong, that I hit upon what you have read. For me, at least, my concept was clarified by what I went through.

Similarly, esthetic concepts can relate objects without being clear what it is about their emotional overtones that connects them--even though you may be aware that they are related esthetically somehow. Works of art can be unclear in this way.

For instance, allusions to things that practically no one has read make some of T. S. Eliot's poems unclear; he even had to publish footnotes for The Waste Land. Obviously you can't connect something emotionally if you don't know what it's connected to, or don't have the "objective correlative" that he talked about. What he was trying to do was bring in the external complexity of having people recognize the phrases he was quoting and add to his poem the force of the poems and so on he was alluding to. But it won't work if people don't recognize that your line is a quotation.

Lack of clarity should not be confused with complexity. As long as I mentioned Eliot, consider the opening three lines of The Love Song of J. Alfred Prufrock:

Let us go then, you and I,

When the evening is spread out against the sky

Like a patient etherized upon a table.

This also serves to show what the difference is between esthetic and perceptive concepts. Obviously, if there is any relation between the sky and a patient, it is emotional, not visible. Note that the evening is "spread out" against the sky, evidently (from the next line) in the sense that a sleeping person is spread out on a bed. But the evening is drugged, unconscious, and yet also facing a crisis, because it is on the operating table. That is, there is something quiet but ominous about the way the evening feels--as Prufrock is about to go into the room where "the women come and go / talking of Michelangelo" and is afraid to say anything meaningful about life, lest they sneer politely at him.

The phrases seem puzzling as you first read them; but once you get beyond the idea that two images must visually resemble each other and concentrate on how you feel as you read the words and picture what they evoke, you see that the lines make sense as setting the tone of the poem.

Lack of clarity is not to be confused with what is called "ambiguity" in art. Very often in a poem, for instance, words are used with two different perceptive senses and both are intended because they are emotionally connected. Robert Frost's Stopping by Woods on a Snowy Evening illustrates ambiguity in the last stanza:

The woods are lovely, dark and deep,

But I have promises to keep,

And miles to go before I sleep,

And miles to go before I sleep.

Up to the last line, looking at the woods has been about the woods themselves and the calm and ordinariness and peace of them; but the repetition of the penultimate line as the last line of the poem makes "sleep" take on the feel of the "sleep" which is death; and the whole poem suddenly becomes an attitude toward life and death. The point is that this is a legitimate attitude toward life and death; and as soon as we hear it, we realize that it is true, if by no means the whole truth on that subject. It is an esthetic way of saying what St. Paul told the Philippians: "I don't know what I'd rather have; I'm torn between the two. What I'd like is to say goodbye and be with the Prince; but staying in my body might be more useful for you."

I believe it was Frost who was once chided by a scientist about how exact science was and how inexact poetry was. "Is that so?" he is said to have answered. "I just spent a whole week looking for the inexact word."

So perceptive ambiguity does not mean esthetic ambiguity; that comes from works of art where one part esthetically says one thing and another part something different. For instance, the rhythm of the following gives you the impression that the poet is talking about a dance:

Take her up tenderly,

Lift her with care,

Fashioned so slenderly,

Young, and so fair.

But the woman is, as I remember, a prostitute whose body is being fished out of the Thames. The poet probably thought that the rhythm would evoke the incongruity of her beauty and what happened to her; but to me, at least, the rhythm is joyous, and even frivolous, and it jars with the esthetic effect of the imagery.

In the early days of Rock 'n Roll, it was interesting to me to see how the lyrics had absolutely nothing to do with the music. They could be love ballads or social comments or even a description of "splishin' and splashin'" in the bathtub; it didn't matter. They were the plain paper wrapping for the music, which was invariably The Joy of Sex. As Rock advanced and developed into Rap, the tunes became more and more of a monotone, and the lyrics referred more and more explicitly to what the music and rhythm had been about all along. Rock and Rap produce unified esthetic impressions now--to such an extent that a record in which a young rapper rejoices at "getting yo' pussy busted" has been banned as obscene. It's about time someone recognized what this sort of thing has clearly been saying for decades.

And this brings up the subject of unity. Obviously, since a concept is the grasp of a relationship, then there has to be a unity in what is esthetically understood.

In natural objects, like landscapes and sunsets, we tolerate things that don't belong, because we don't expect nature to be arranging itself for our esthetic understanding. Hence, a tree in the wrong place in a landscape doesn't take away from the beauty of the landscape--until you make a photograph of it. Then it annoys.

Why is that? Because a photograph is a work of art; and a work of art is supposed to be making a statement. You don't put words into pleonastic a statement if they don't belong and contribute to its meaning. If you do, people get confused, because they presume you're not an idiot and so they try to fit it in somehow. Similarly, if there is something incongruous in a work of art, the viewer will think that it is there intentionally, and will try to see a meaning with it in there, with the result that he will fail to understand what you did say. (You did catch what I was doing with "pleonastic" above, didn't you?) This doesn't happen with nature because we don't try to unify everything, and so simply ignore what doesn't fit.

Slightly different from the clarity of a concept is its precision. Teachers of writing are constantly urging students to be "concrete" and "precise," when often what they mean is to be clear. Unfortunately, students are apt to think that substituting a concrete object for a general term makes what they are doing precise and snappy, when often it makes it ludicrous or boring. I remember a philosopher who likened Josiah Royce to "two eucalyptus trees." Why two? Why eucalyptus? His paper gave no hint of this, though the trees kept sprouting at every other page. It was concrete; but the concreteness added nothing to what he was trying to say, and in fact distracted from it. The fact that I remember the two eucalyptus trees and nothing whatever else about what he was saying after what must be fifteen years shows how such incongruous concretion can overwhelm the point. I am sure you remember some clever television commercial without being able to recall the product it was trying to sell, because it called attention to its own cleverness, not its product.

There is a mystery writer, whose name escapes me, whose detective, a Bostonian, roams streets that I knew from childhood. But the writer is infected with this disease of "concreteness," and even I got tired of hearing every business establishment along Route One described as the hero drove up it. A good writer like Dickens uses descriptions to contribute to the emotional atmosphere he needs for his novel, not to shout, "See how much research I have done!" After you've read the Cliff Notes of one of Dickens's novels and see what the plot is, you can read it and notice how beautifully the extended descriptions fit.

While I am on the subject of Dickens, I think it worth while to point out that his characters are often seen as one-dimensional, because each of them has some stock phrase or gesture that makes him immediately identifiable. Mr. Micawber is always stepping back for a leap forward, Uriah Heep is so 'umble, Fagin is always saying "My dear," Mrs. Jellyby is so concerned with the African natives, Mrs. Dombey is so proud, and so on.

But this sort of thing is only at a very superficial level, and the characters actually are very subtly drawn. The device of the tic each character has comes from Dickens's realization that in a novel of more than eight hundred pages, with dozens of characters coming in and going out and weaving their lives together, something more than just names would be needed to make the reader remember, after a lapse of a month (the novels all first came out serially), who was who. So the identifying tic is really nothing more than the literary version of what a costume designer does on the stage in dressing the characters in distinctive colors so that each will be immediately recognizable.

In this case, then, the concreteness of what Dickens does with his characters has an esthetic point, and lends clarity to what is an exceedingly complex work. So individualness and "concreteness" can sometimes be clear and precise, and sometimes not.

A concept is precise when it leaves out anything that is not relevant to it; and a term or phrase or image is precise when it says just exactly what is meant, no more and no less. This is not the same as clarity, though it is closely connected with it. For instance, the definition of an infinite set two chapters ago was precise, but not clear. It showed, as you recall, how you could determine whether any object in the universe was a member of the set or not (and so it was precise, excluding what was not relevant); but it was not clear, in that it did not define what was meant by "all" in the sense of "and no more."

In esthetics, there are such things as general esthetic concepts, as well as concepts that are tied down to a definite small set of objects. To substitute the more "concrete" ones (those of less scope) for the general ones, by using imagery that is too definite, is to give the impression that the concept is more restricted, when in fact it is still the general idea that you are trying to get across. Thus too much "precision" in this sense produces a lack of clarity.

In the respects above, esthetic concepts are not really different from perceptive concepts; but esthetic concepts have a property that perceptive ones don't have: that of intensity. Depending on how strong the emotion is that is the basis of the esthetic comparison, the esthetic experience itself will be more or less intense. The understanding as such is spiritual, of course, and has no degree; but just as our perceptive understanding is always connected with an image, so our esthetic understanding is not divorced from its emotional base; and so the experience as a whole is both intellectual and emotional (as well as involving other sensations, of course).

There can be simple esthetic concepts that are very intense (what critics call "powerful"). I mentioned that Were You There wasn't as simple as it seemed to be on the surface; but it clearly has nothing like the complexity of the St. Matthew Passion, which is "about" the same event. But some people find it more intense than the Passion, if only because it is all so concentrated.

But of course there are extended, complex works that also are extremely moving. I remember my little son's coming into my living room as I was listening to a recording of Die Meistersinger, and being frightened at seeing me sitting there staring at the stereo with the tears streaming down my cheeks. I happened to be in a super-receptive mood at the time, and it was too much for me.

This ability art has to overpower people has no counterpart in the perceptive realm. Since the emotional impact is connected with understanding something that is recognized as true, the combined experience seems to be that the material world is torn open before your eyes, and you are staring straight into the face of God. C. S. Lewis, in fact, in Surprised by Joy, connects this experience with God; and if I remember correctly, credits his conversion to Christianity to it.

But of course, it isn't really anything that cosmic; it's just that esthetic understanding carries emotional freight along with it, and when the emotions are very intense, they invest the concept generated from them with a special sense of importance. One time when Robert Shaw was conducting the Cincinnati May Festival Chorus in a performance of Handel's Messiah, his pre-concert pep talk to us gave me, at least, the impression that he was convinced that music and nothing else was going to save the world. I am sure that this was because of the intense esthetic experiences he had connected with it. He was deluded, unfortunately.

In connection with intensity of the esthetic concept, I think this is the place to speak of Aristotle's famous "catharsis" (purging) of pity and fear that he says in the Poetics is what happens in a tragedy. His idea is, I think, mistaken as I understand it; but it points to something like what I was talking of above. From what he says, I gather that the experience of a tragedy where you are watching some absolutely horrible event that you know isn't really going on there before your eyes is something like what people do when they take roller-coaster rides. In a roller-coaster, you feel as if you are going to fall to your death, but at the same time you know that you are perfectly safe; hence, you can deliberately experience the fear as a sensation, without bothering with what you are supposed to do about it.

In that sense, I think that enjoying roller-coaster rides (and tragedies too, insofar as they are experienced in this way) is supporting evidence for my position that goodness and badness and pleasure and pain are subjectively defined. I mentioned this in the section on the sense faculty in Chapter 5 of Section 2 of the third part 3.2.5, and in Chapter 10 of Section 5 of the first part 1.5.10 in discussing the subjectivity of goodness and badness.

In any case, I think that what Aristotle was saying was that the fact that you know that there are actors in front of you makes you not throw up or rush to help Oedipus when he comes on stage with his eye-sockets streaming blood; and so the emotion of pity for him, and the fear that this might well have happened to you in the same situation, are "purged" by the laxative of your awareness that it's all "just pretend."

I don't really think that that's what's going on in tragedy. It might be what happens in horror movies or the type of movie nowadays that revels in how realistically it can show human entrails being gouged out of people.(1) But tragedy goes beyond this. You experience the horrible emotions of pity and fear (and disgust, seeing Oedipus, for instance)--emotions which you would ordinarily avoid--but you do so in a context where the horrible events that happen to the hero make esthetic sense, and so the problem of evil is esthetically solved for you. You see how the hero brought this retribution on himself, and, horrible as it is, how it is just and fitting; but you see this esthetically, through your emotions, and not just as an abstract perceptive fact; hence you understand it in that "other" way we have of understanding.(2)

And since pity and fear are unpleasant emotions, they tend to be more intense than pleasant ones; and so the tragedy tends to be one of the most powerful of esthetic experiences. But it must be connected with this realization of a fact about the evil that happens in this world for it not to be merely a roller-coaster ride.

There was a movie some years ago called Jeux Interdits, which had to do with a child who was running away from Nazi strafers with her parents, and when she got up after the planes had passed over, found both of her parents and the little dog she carried shot. She wandered into a farmhouse where there was a little boy, and the two of them buried the dog, taking a gravestone from the cemetery to mark the place. Later, they began burying other animals this way (these were the "forbidden games"). But at the end of the film, the girl was simply taken away from the family she had grown so attached to. I knew the plot, but was able to sit through only half of the film, partly because I simply could not stand the point of it, which was that none of this made any sense; bad things just happen. And of course that too is true. But it was too much for me. Of course, part of the problem in my case was that at the time, I had two children just the age of the two in the film (they looked a good deal like them, in fact, since I am of French descent and so is my wife in part), and the girl happened to be named Paulette and the boy Michel--and my children are Paul and Michele. This is a good example of how circumstances of one's personal life can invest something with a meaning it can only have for oneself, but which is none the less valid for that. In any case there was no "purgation" of the emotions for me, however good the film might have been for the normal person.

The point here is that when the emotions are so intense that they overwhelm the idea, the esthetic experience is lost. It is like what I said with respect to pornography a while back. Here the idea conveyed was too feeble to support the emotions I felt (I would find it difficult to imagine an idea so profound that it could have sustained the emotions I was feeling); and so it was trivialized by the very emotions it sprang out of. On the other hand, Othello or Madama Butterfly mean something that makes the agony of watching the heroes destroyed worth while.(3)

So this theory of the esthetic experience seems to make a lot of things about art fit together; which gives me the notion that it must at least be on the right track.

Before talking about beauty and ugliness (Haven't I been? No.), there is just one other thing about esthetic concepts and facts that needs mentioning in this sketch: the fact that there is such a thing as esthetic logic. In a work of any complexity, there will be lesser insights that are understood and go together into a larger whole, just as in this book, there are the words, the sentences, the paragraphs, the chapters, the sections, the parts, and finally the whole, which can be summed up in one single statement: "This is how reality is related to experience."

Esthetic concepts and judgments (and their expressions) connect together in ways that are entirely different from perceptive judgments and statements; and as a matter of fact, one of the most common fallacies in art is to join the parts together by perceptive logic rather than esthetic logic.

It sounds a little odd to talk about the "logic" of the parts of a painting, but actually, you don't see a painting all at once, as I mentioned in the section on the sense faculty in Chapter 5 of Section 2 of the third part 3.2.5 where I was discussing the time sense--even though the experience is, as I mentioned there, in another sense timeless. But the fact is that paintings are so arranged that your eyes tend to be led from one part to another by lines, colors, and shapes, in a very definite pattern; and so, although it seems to you that you are just looking at the painting as a whole, you are noticing parts of it in sequence and not randomly. This is the visual logic of the painting.

A painting's esthetic logic, however, is somewhat different. For it to "work," the esthetic meaning of the various parts has to follow the sequence of the visual logic, so that there is a progressive deepening of the partial truths that go to make up the painting's whole impact. You must not only understand how what you now notice builds upon what you saw a moment ago, but also where you are in the process of noticing and how far you have to go before you get the point. It is this awareness of being in the middle and not completely understanding what is there that, even in a painting, makes it dissatisfying to be called away from it before you have finished looking at it. With music and drama and novels and so on, this is a little easier, because you can know just by looking at the program (if the editors were kind) or seeing the number of pages left to go. But the experience of being lost in the middle is analogous to what happens when listening to a badly organized speech. It seems interminable, not because the speaker isn't saying anything (though too often that too is true), but because even when he's saying something--perhaps too much--you don't know where he is in the whole thing, how he got there, and how long it will take him to shut up.

The esthetic logic of the various works of art is what is codified in the "rules" of the art in question. In painting, for instance, the rules of composition are the rules of the esthetic logic of the painting; in music, the study of harmony and other aspects of composition gives you the rules of how our ears follow things and how the emotions connected with them also follow. Aristotle did a pretty good job of giving the esthetic logic of the drama of his time, though of course much of it has loosened up as time has passed; we no longer think of it as an esthetic virtue to have a drama happen in what we now call "real time."

If the logic of the work of art is violated, it produces confusion, not understanding; people don't see how the parts fit together.

Then why is it a dogma of art nowadays (and for past centuries also) that "Rules are made to be broken"? Nowadays, in fact, one of the most rigid rules for art is that it's no good unless some accepted rule has been broken. And artists, who have been breaking more and more rules, have, in their slavish attachment to this rule, begun breaking the rules of common decency to make their art viable and "strong." I was once talking with an art professor in my college, who showed me a set of paintings done by a girl as a Freshman and then as a Senior. The first set was competent pictures of flowers; the second, a series of animal skulls. "See how she's improved?" he said. "Those are strong." I could see no significant difference between them beyond the choice of subject.

But aside from breaking rules in order to follow the rule that you must break rules, the fact is, as Kant pointed out, that geniuses do break rules. In fact, Kant defined genius as the capacity for making rules, in the sense that the genius, in breaking the established pattern, has created a new pattern for people to follow.

And of course this is what is behind all the rule-breaking. The genius, as I mentioned when discussing abstraction in Chapter 4 of Section 3 of the second part 2.3.4, doesn't organize his perceptions (and/or emotions) in the same way normal people do, with the result that sensations get connected with what at first sight seem totally unrelated other sensations; but once this is done, understanding can see a relationship, which obviously was not seen by anyone before, because no one had connected the objects in which its foundation lay.

Geniuses, precisely because energy in their brains does not flow along the paths it does in a normal brain, also have a different sort of spontaneous logic within them: one which can be consciously imitated by those who follow them, but which is natural for the geniuses themselves. When a person of this type sees a series of relationships and recognizes that it leads to something true, then, trusting that he has hit upon something valid, he ignores what he has been taught, and goes his own way.

To the extent that what he saw is true, and to the extent that he expresses himself clearly, people can follow his logic and understand what he was saying. And once they see that the new logic describes the world (and/or in esthetics the human way of reacting emotionally to it), the new logic "catches on," and we have new rules for that field of investigation.

This happens in the perceptive realm also. Newton's Philosophiæ Naturalis Principia Mathematica introduced, with the "fluxions" (his version of the calculus), a whole new procedure (i.e. a new logic) in "natural philosophy," which was seen as valid until Einstein's new approach and that of quantum mechanics superseded it. This is the same sort of thing that I was describing, even though I was slanting it toward art.

So the true artist is not necessarily trying to be "innovative" and "break new ground." Very often those who are breaking new ground leave behind nothing but a hole. No, the artist has seen something which cannot be expressed using the old rules, and so new ones must perforce be invented, because to follow the old rules is to falsify the insight. As you read this book, you have probably noticed how many neologisms there are (my spelling checker certainly has); but they are there, not because I am interested in coining new terms, but because the ordinary terms are misleading. In this very section, for instance, I used the term "individualness," because "individuality" has a different meaning from what I needed at the time. The same goes for the "breakthrough" artist.

One of the reasons, of course, that so many of our contemporary artists break the rules is that they really have nothing to say, and following the rule of breaking the rules is the easiest way of sounding as if you're saying something--because people can't understand you, and if they see your stuff hanging in a gallery, they're generally humble enough to blame themselves, not you. Every genius is misunderstood in his own time, perhaps; but it doesn't follow that everyone who is misunderstood in his own time is a genius. There is nothing special about my time in this, I think; most of what is produced by way of art in any age is nothing new or profound, just as most of what goes by the name of science in our scientific journals is pretty pedestrian stuff.

One way, actually, that you can tell the artist who has something to say from the one who is laying down a smoke screen over a desert is that the former gives the impression that he is trying very hard to make himself clear. He doesn't use highfalutin, pretentious phrases, artistically speaking; he just talks--it's just that the way his words go together doesn't seem to make sense until you shift your perspective.

This again happens in the perceptive realm. Those "scientists" whose works are full of jargon are trying to hide the fact that they've got nothing to say, while those who have something new to say tend to use ordinary terms whenever possible. This is not always true, of course. Kant's works bristle with technical terms and tortured syntax, and yet he had a brilliant and profound new approach to things. There is no law that says that a person has to eschew the fancy word that expresses his meaning just because there is a simpler term that almost does the job. He will be less clearly understood by more people in the latter case; but some people would rather be more clearly understood by fewer, and so take the exact if more unfamiliar term.


Notes

1. I remember that when my son was very little, he had no trouble watching cartoons in which all sorts of mayhem were perpetrated upon the characters. But once "Uncle Al," one of those hosts of a live show for children, was in a Halloween special playing Hansel in Hansel and Gretel. My son was jumping up and down and screaming for me to do something when Hansel was about to be put into the oven. He knew it was Uncle Al, and Uncle Al was a real person--and so for him there was no "catharsis."

2. This is not to deny Aristotle's point that in order to achieve esthetic distance, allowing you to see these horrible things for their meaning, you have to know that they're not really happening. Hitler's agents making lampshades out of human skin might have had an esthetic experience, but only to the extent that they dehumanized themselves.

3. Not that I go to see this particular play and opera any more. I seem to be becoming more sensitive to the sufferings of others as I grow older, and I've already learned the basic thing these works are trying to say, I think; and it seems less and less worth while for me to watch other people suffer, even if only "just pretend," for the sake of learning new esthetic details.



Chapter 4

Beauty and art

Finally to come to the subject of beauty, I mentioned earlier that beauty is not the same as an esthetic fact, or even an esthetic property, but is what in esthetics corresponds to goodness in the perceptive realm.

First, let me make a little clearer what I mean by an esthetic property. You will recall from Chapter 4 of Section 2 of the second part 2.2.4 that properties are modes of the finiteness of a body, based on similar effects upon us (or some machine), when those same bodies that are thus similar are different from each other in other effects they have on us.

I am stressing this because a perceptive property like greenness is thought to be something like a part of the object, a distinct, separable (at least in thought) something-or-other about it, to such an extent that philosophers like Locke and Hume thought that bodies were just collections of properties. But the reality is the other way round. The object is a unit, though a multiple unit of parts; but it is a finite unit, and so its finiteness as a unit contradicts itself into the multiplicity of its behaviors as a unit; and its unity is in its multiplicity and its multiplicity is in its unity. The finite, remember, contains its own opposite as defining itself.

It is actually a little easier to see this when discussing the esthetic property of something. The meadow actually has smilingness when the sun is shining, because everyone recognizes that it smiles in the sunshine and is not smiling on a day like the day I write this, overcast and gloomy. But it is clearly the behavior of the whole complex as a whole that is capable of affecting me in the same way I am affected by someone's smiling at me; and here the behavior cannot be "separated out" from anything about it, the way greenness can as that which affects my eyes.

That is, what is it about the sunny field that gives it the same power over my emotions that a smiling face has? It obviously has something to do with the light and the color, because a brown or yellow (or for that matter, a purple) field wouldn't be felt as smiling. But what is this something that it has?

There is no answer to that question--as, in the last analysis, there is no adequate answer in the perceptive realm. Grass and emeralds react in the same way as units to light falling on them; but what they do in absorbing some energy and flinging away other energy is unknown; all we know is that the energy they throw away is what produces the "seeing green" sensation in us. But it is the object that has the color, not really the light, and certainly not my eyes. Similarly, the sunny, smiling field's pattern of light is like that of a smiling face in its ability to affect my emotions; but in itself we know nothing about either, except the fact that the two are somehow objectively similar.

Therefore, what it is about them that connects them in the way understood by the esthetic concept is the esthetic property. It is in itself no more mysterious (though no less so) than any perceptive concept, and it is no less objective than any perceptive concept either, as I have been at pains to show.

With that out of the way, let me give the following definitions:

Beauty is an esthetic property one expects to find in an object.

Ugliness is the lack in an object of an expected esthetic property.

Philosophers, from Plato and Aristotle through Augustine and Aquinas--up to Kant, actually--have thought of beauty in terms of what we today would call "prettiness": as the traditional Thomistic definition has it, "That which, when seen, pleases." Aristotle's notion of catharsis, in fact, was an attempt to show how having vicarious horrible experiences could be pleasant.

But Kant shifted the ground with his notion of the esthetic judgment as being subjective but universal; and contemporary art seems to be holding that only what is unpleasant can be beautiful--or the term is taken in its traditional sense, and art is then declared to have nothing to do with beauty, but only with "meaningfulness."

But I think that what people are looking for in art is an esthetic experience, which does not necessarily have anything to do with a pleasant emotion. And when they have the esthetic experience, they tend to say, "How beautiful that is!" Hence, beauty is (a) something in the object, (b) something looked for a priori in it, but (c) not necessarily something that produces a pleasant emotion. I think my definition fits all of these specifications.

It also explains why "beauty is in the eye of the beholder." The esthetic fact itself and the esthetic property are something objective and "out there" waiting to be understood. The meadow is objectively like a smiling face. But the beauty isn't, except in a derivative sense, because it depends on what you expect to see.

Many is the person, for instance, who stands before a painting by Jackson Pollock and says, "What's that supposed to look like? I could do as well myself." That same person would never dream of asking what a Bach toccata was supposed to "sound like," because he didn't expect to get the esthetic effect by comparing it to street noises. But he expects a painting to resemble some visible object, not simply be a set of colors and lines and shapes each of which has its emotional impact, and whose emotional impacts are interrelated in a logical and meaningful way. He doesn't see the logic because he's looking for a different relationship; and so to him the work is ugly.

Have you ever noticed that a face you first thought of as ugly takes on a beauty as you get to know the person behind it? Instead of seeing it in its relation to the regular features and so on of "the perfect face" of your sexual drive, you now see it as the expression of the personality of the person; and to the extent that it reveals what you find spiritually congenial in that person, to that extent it is beautiful to you, if not pretty. That is what I am driving at. You understand what the face says; and it says now what you expect it to be saying.

Of course, one of the reasons why people equate prettiness with beauty is that most people expect things to be pleasing. Obviously, as people become educated and their taste becomes more refined, as I mentioned above, then their expectations change and what they consider beautiful also changes. As I mentioned, artists nowadays (of all types, it seems, including musicians, architects, everyone) have expectations that make them seem to look on everything pleasant as ugly and only what is unpleasant or jarring as beautiful. Of course that fits in with our present-day philosophy in which life is senseless anguish. I hope this book might contribute to turning this around, so that we can seek beauty once again in what is pleasant, not denying, of course, that what is unpleasant can be esthetically meaningful--and so beautiful--too.

Beauty is called one of the "transcendental properties of being," which I gave such short shrift to in Chapter 13 of Section 5 of the first part 1.5.13. Of course, as I mentioned there, you can consider any being as beautiful, because you can adjust your esthetic expectations to fit its reality, in which case it will match your expectations and then be beautiful. And since every perception has an emotional overtone, because instinct is always operating when the senses are, as I said in Chapter 5 of Section 2 of the third part 3.2.5, then any object can produce an esthetic concept, and so has an esthetic property of some sort.

But this brings up the distinction between beauty and esthetic truth. The esthetic property is there in the object, because in fact the object can affect your emotional apparatus in a certain way, which can be related to other objects which affect you emotionally. Thus, you learn an esthetic fact about the object, which is objective and has nothing to do with your expectations of it: it is either such that it affects your present emotional state in this way, or that it affects your normal emotional apparatus this way, or that it affects the normal human being in this way emotionally.

Esthetic truth occurs when you attribute the esthetic property to the object and it is really there. That is, you may think that an object has a certain esthetic property for the normal person when in fact only you, because of the peculiarity of your emotional apparatus, can notice it. Or you may think that the property is a permanent one and not something due to the momentary condition you are in. Hence, you can make esthetic mistakes.

Artists are very prone to make esthetic mistakes that are akin to emotional hallucinations. Most artists tend to feel emotions as they produce their work; the novelist will feel what his characters are feeling, the artist will feel the emotions connected with the paint he is laying down, the composer the emotions in the music, and so on. They must do this, I would think, or the work will be just mechanical, following perceptive rules of logic rather than esthetic ones; the work has to "feel right" as it is progressing.

But of course, since they know what feeling needs to be produced by what they are doing at the moment, and since they are in fact feeling this at the moment, then it is quite easy for the artist to put down something which does not in fact produce the emotion and think it does because he happens to feel it as he puts it down.

This is why Horace in the Ars Poetica tells the budding poet to put away his poem for nine years and then look at it. My view on this advice is that to be sure you haven't fallen into the error above, you have to leave the work alone long enough that when you pick it up to look it over, you don't remember what feelings you were trying to produce. If it "works" for you now, then it's a fair bet that you didn't read into it emotions that you happened to feel at the time, and it really does what you wanted it to do.

So truth and error are possible in esthetics, particularly in works of art. But even nature can be esthetically deceptive, like the peaceful little pond that you discover is actually full of quicksand; as soon as this happens, it becomes sinister. Something of that is also in the dewy spider's web in the morning, and there is even a kind of falseness about the bird's song, which sounds so sweet and friendly to us, when we learn that it is a scream to keep away from the territory it has staked out. It is the conflicting emotions here that make the esthetic fact ambiguous. Alfred Hitchcock's best films exploit this horror underlying what is everyday, like the cheap motel in Psycho.

But the question of esthetic truth brings up art. Art is not, as artists are fond of saying, what artists do as such, because artists themselves criticize each other's art. I remember an architect writing to the newspaper here in Cincinnati castigating the people for protesting the model of a new downtown building which apparently was designed to look as if the scaffolding had never been taken down--an interesting idea, you must admit, but not one that I personally would like to have to confront daily from my office window. He mentioned how people didn't understand architecture, and that was why we had so many horrible buildings downtown, and it was about time that we had one that was innovative and all the rest of it. I wondered as I read the letter who designed the buildings he derided. It must have been architects; so evidently being an architect doesn't automatically make your buildings great art. And of course painters criticize the output of their students as well as that of those who don't agree with their ideas about what art is. So artists agree that there is good and bad art, and "art" that doesn't deserve the name at all; it's just that they don't want the rest of us butting our noses in, any more than the scientists want laymen talking about limitations on research.

If my theory is true, then an artist is one who has understood a fact esthetically, and wants to share it by stating it to others. Hence, he produces a work of art, which is essentially an esthetic statement of the insight he has into the way the world actually is. It follows from this that the work actually has to say something, and presumably what he intended it to say. That is, it is conceivable that an artist can serendipitously produce a work that is significant but says something entirely different from what he intended, but it is unlikely in the extreme.

While we are on this subject, there is the business of apes fooling around with paint and producing works of beauty. There is nothing surprising in this, because droplets of water suspended in air and driven by the winds can produce sunsets that take your breath away. Something understandable has been produced, but not because the one that produced it understood anything. We saw that fallacy in discussing direction and purpose in Chapter 4 of Section 3 of the second part 2.3.4. The painting by the ape might be a beautiful object, but it is not a work of art, any more than the sentence uttered by your parrot is a statement. A statement implies an intelligent source trying to communicate a concept to someone else who is intelligent.

But even supposing that, you can still make mistakes, as I said above, and communicate something different from what you intended, and have the work still be art, or the statement still be a statement. The statement is a material thing "thrown out" into the world by you: the material for grasping a relationship. As a material object, it has things so arranged that the parts have certain interrelationships among themselves and don't have other ones; and so it is objectively such that it will awaken certain concepts and not others. For instance, in a textbook I once wrote, there was a typographical error in a key place in one of the definitions: where I had intended to say, "There must be no intention to harm the other person," the sentence read, "There must be intention to harm the other person," which meant, of course, the exact opposite of what I intended. I could not hide behind, "You know what I meant," because a teacher has to suppose that his students precisely don't know what he means if he says the opposite of what he means. We discussed this in talking of the truth of language in Chapter 5 of Section 3 of the third part 3.3.5.

Esthetic statements are like any other kind of statement, then. They are true if they express what is in fact the case, and false if they don't. But if they are false, this can be due either to the artist's not understanding what is the case (as when a novelist creates characters that are too true to type to be real) or to his misstating what he intended to say (as when the emotions he had as he wrote induce him to put down something that doesn't produce that emotion, as we discussed earlier). Of course, the work can be true by accident, because the artist's misstatement of what he intended happened to say something else that was true. But the supposition in any work of art is, if this theory of esthetics is true, that it is a statement of a fact understood by the artist.

And that, of course, implies that the artist has something to say. One of the reasons that art students produce things that are derivative and not significant is that either they have nothing to say and are just fooling around with the language (paints are the language for the painter, and the rules of composition are the grammar; but it doesn't follow that putting paints on canvas and following the rules makes the resulting thing mean anything), or they are repeating statements which might be new to them, but are things everyone at all sophisticated in art already knows.

This is, of course, why painting or music or poetry "of the school of" is regarded as bad. Generally speaking, what it does is repeat what the master has already said; and even if it repeats it very competently, nothing new has been added. If someone produces a huge plastic hamburger and puts it in a museum, the esthetic idea connected with it is the emotional incongruity of seeing an ordinary object in a new light, to be looked on as a work of art. Fine; it's an idea, if not the world's most profound. But when another puts a huge wooden hotdog in a museum, he hasn't said anything new; he's just uttered a synonym for what was said before.

In this sense, every work of art has to be something new; we don't like being told "Two and two are four." We are quite willing to admit that it's true, but we already know that. And when someone else comes along and says, "Three and three are six," we don't think of him as informing us of anything.

But that does not mean that the artist has to use some new technique or make a "radical breakthrough." That again is confusing the statement with the words. Thinking you've said something new by uttering "Dos y dos son cuatro" is obviously silly.

And that is the tragedy of many artists, as it is of many scientists and of people in every field. Many, many artists want desperately to say something, and even are extremely competent in how to say it--but just have nothing to say. Browning's Andrea del Sarto is a poem about someone (a painter) who has much better technique than his contemporaries but recognizes that they are better artists than he. And in the perceptive realm, Kant was horrible as a writer; but he's still read because in his halting way, he said something no one else could have said, and something that was enormously profound.

So the first thing the artist has to do is see something, understand something, and get a concept. And this, of course, is artistic inspiration. Somehow the "light goes on," and you know something you didn't know before.

Inspiration isn't really some visitation from the blue; all it is, really, is seeing a relationship (through your emotions, of course) that you didn't see before. It's the same thing that happens in the perceptive realm when a person gets a new idea. Frequently, as in the perceptive realm, it's a kind of hypothesis toward the solution of some difficulty--even a technical one--that has come up in the field. You want to see how you can bring the background into the front of the painting and still leave it the background; you want to see how you can get around perspective's making a "hole" in the painting; the school you are in needs a march for the new football team--and could the main theme have the same rhythm and melodic line as the pronunciation of the school's name?

Very often the attempt to start something will suggest some relationship. You have a scene for a novel, or perhaps just a character you saw last week. What kind of situation would make that character do something significant?

The initial idea is often very vague, just a hint that there may be something there, and you're not even sure what the idea is and where to look for it in the suggestions that come before you. Many artists leave things like this in their heads, for quite a long time, coming back to them at odd moments, and mentally fitting in various things to see what this insight might develop into.

This is a good deal like the scientist's "observation" stage after he initially becomes curious. I don't want to overdo the analogy between art and science, but there is a parallel because they are both intellectual endeavors; the difference between them is that science doesn't pay attention to emotions, while for art the emotions are the basis of the idea.

After enough of a gestation period, you think that the concept you have is valid, and you have a fairly good idea of what it is you want to say and how you are to begin saying it. You would have a hypothesis to test, if you were a scientist.

Then you begin putting it down. This corresponds to the "experiment" stage in science, and it really is an experimental procedure. Some artists are able to put down the whole finished work as fast as they can work, with no changes--just as some scientists write down the results of their "thought experiments" very facilely. But most of the time, things are not going to turn out as you originally envisioned them; the first brushstroke of paint on the canvas is going to show that the sketch you put on there won't work; the shapes are going to have to be different now. And as soon as one color is there, the esthetic logic of what is already on the canvas is going to change what has to be done to keep your basic idea--or it will even suggest a different idea which you see as better than the one you started out with.

Thus, as soon as the artist begins work, there is a dialectic between him and his material; he wants to make it do what he wishes, but it wants to do what it wants. It happens, in fact, that the recalcitrance of the material can sometimes prevent one from putting down what he had in mind. Michelangelo himself started several sculptures that he gave up on; one famous half-finished pietà is displayed in Florence. Apparently he was going to smash it up, but his assistant told him not to break it but to give it to him. The assistant then carved the face of Mary Magdalene and realized that he was ruining it, and left it alone, unfinished--and even in its unfinished state, it says something extremely powerful.

Connected with this is what is called "respect for the medium." If you are to be its master, you must become its servant; you must recognize what its tendencies and limitations are, to cooperate with it as the two of you together produce the statement you are trying to make. This is especially true in art. If you try to make the material you are working with do something that is possible but unnatural for it, the strain of what you are doing to it will show in the finished work and add an emotional overtone--and it had better be that that emotional overtone of tension or strain fits into the work as a whole, or it will destroy it. For instance, Rouault's paintings with their huge blobs of paint (some of which will never dry) have the emotional overtone connected with the "misuse" of the paint; but this is fitting, for instance, in his famous painting of the face of the crucified Jesus, because part of the feeling of sorrow and violation you get is that the very paint is suffering.

There are analogies with all of this in the perceptive realm. As I began to write this book, I of course had a pretty good idea of what I wanted to say; but as it went along, the logic of what I was saying suggested other ideas, and to fit them in properly I had to go back and revise and rewrite, and even leave out some things I had written before, because they no longer fit. Occasionally, I would find the phrase that exactly expressed what I wanted to say, only to discover, in reading over what I had written, that I had used the key word a sentence or two above it, and robbed it of its force. One or the other would have to be changed, or the work as a whole would fail. In the same way, the artist's idea is very seldom completely formed in the period of gestation before he actually sets to work; and the work changes it.

Beethoven is a beautiful example of this. His compositions sound so spontaneous and "right," but if you look at his three Leonore overtures, you can see how he kept revising and revising--until finally he didn't use any of them for his opera, and we hear what is now called Fidelio with its own completely different overture. So the fact that you have to go back and rethink what you are doing and erase and redo is no indication that you aren't a genius.

And this brings up the subject of genius again; only this time, let us look at it in terms of "creativity." I said that the artistic inspiration is getting a new idea; but it doesn't have to be startlingly new. Unfortunately, much that goes on by way of "encouraging creativity" does nothing more than encourage randomness. Little children, for instance, want to draw things that look like something, and are not really imitating Paul Klee (who is no child, by any means); they of course have very active imaginations, and can pretend that what they have drawn looks like what they intended to draw.

It is not necessarily good for them when you put no restraints on what they are doing and refuse to guide them in the direction of where they want to go (showing them how what they are doing can be more realistic, for instance). An artist is not someone who spills out what is inside him; he is someone who submits to the facts outside him. His "creativity" comes from seeing something objective, not from unrestrained subjectivity.

The genius-type is going to go his own way ultimately; but his early life is most probably going to be helped by learning discipline and submission to the restraints imposed by his materials and the facts. After all, there have been geniuses for thousands of years, making thousands of mistakes; and if he starts from scratch, he's apt to make the same mistakes others have made before him. But if he is taught what went before, he has a body of knowledge that he can build on, and will be able actually to advance the world's wisdom a step or two because of the new approach.

I personally don't think that a genius is the kind of person who won't listen; geniuses listen. It's just that they listen to what's inside them as well as what's outside. If you tune out what's outside in the name of "encouraging creativity," you deprive them of information they need to check what's inside so that it doesn't become just eccentricity. I myself was quite docile as I was going through my education, though on the side I was experimenting with some of the ideas I came up with. I realize now that I give the impression of never listening to anyone; but it's not true. Like all genius-types, I regard (especially now that I'm getting older) new information as a kind of threat, because it means that I might have to change, and I find change daunting. But it isn't that I don't listen to it, or that I don't want new information, even information that would be evidence that I am mistaken.

But the point is that the traditional, very rigid education I had was anything but a hindrance to me; I am in fact extremely grateful for it. I studied philosophy by learning "theses," statements of the position we were to take, the objections against the thesis, the people who held the objections, the argument (in strict syllogistic form) that proved the thesis, and finally the answers to all the objections. There's no more cut-and-dried way of learning anything than this; and many who studied philosophy with me rail against what they were subjected to.

But I found it the quickest and most efficient way to get the information I needed; and it was later, when I began examining the arguments and reading the much looser writings of the philosophers themselves, that I had a framework I could view things from, and could see to the heart of what they were saying more easily than I could have without that training. When I then went off in a new direction, I knew what I was leaving, and more importantly why I was leaving it. If you don't know grammar, you don't know when it is better to break the rules; if you split an infinitive, you do so haphazardly, not because it is better in the context to split it than to slavishly keep to what is grammatical but less forceful. Similarly, if you don't know the rules of an art form, you will break them, but you won't know why.

You will notice that Wagner in Die Meistersinger has the musical genius Walther von Stolzing taught restraint and form by Hans Sachs. People who see the opera notice that Walther's spontaneous melodies that don't fit the rules are miles above Beckmesser's pedantic efforts; but Walther would have been nowhere in the song contest without Sachs's coaching. So Wagner was by no means "encouraging creativity" in the sense of letting the genius go his unrestrained way; it was actually Sachs who understood what was behind Walther's first song which was such a failure--something Walther himself did not understand--and who developed it into something meaningful.

You can, of course, stifle genius by teaching art (or anything else) as if what you are teaching is "the truth, the whole truth, and nothing but the truth," and making every deviation from it or every questioning of it tantamount to sin. This is not the same as teaching what is known up to the present as the truth, but not necessarily the whole truth and not necessarily nothing but the truth. What we now know (and this goes for anyone in any age) should be taught as something that has to be learned before you can build on it.

But in general, not even this will stifle the true creative person, because he'll catch on to what you're doing and will (even if he obeys) hold it in suspicion. Most of the "creativity" that was stifled by actually teaching kids something wasn't, I suspect, there in the first place. I don't see, after generations of this "encouraging creativity," any remarkable burgeoning of new insights; what I see is a lot of desultory silliness.

Most people aren't creative, and this is not to be deplored; we don't need to be changing direction every five years; we need time to dig out the implications of the "breakthrough" insights and digest them, and fit them in with the accepted wisdom of the past. What was past does not automatically become repudiated by the new departure; very often the new departure only negates one focus on what was known; and both points of view have to be recognized in their validity in order to push the frontier of ignorance back still farther in the future.

So there is a place, and a very important one, for the non-creative person; and this goes for the artist as well as the perceptive thinker. Just as in the realm of science, there are manuals and textbooks that need to be written clearly and succinctly by those who understand thoroughly what is known but aren't making any new discoveries themselves, so in all forms of art there are illustrations that have to be made, music that has to be added to things, carvings to be done, television plays and commercials to be written, and so on and so on; and in such things dramatic new insights that force people to rethink the foundations of what they'd accepted as true in art are not only not helpful, they get in the way of what needs to be said esthetically.

So if you aren't creative, you have nothing to worry about. In the first place, you're probably better adjusted than a genius, as I said in discussing abstraction in Chapter 4 of Section 3 of the second part 2.3.4; and it by no means implies that you should stay away from the arts, any more than not being an Einstein means that you should stay away from science if you are inclined that way. There is plenty to do that needs competent, well-trained artists who understand what they are doing, but don't have the ability or the inclination to take a completely new focus on things.

You will still, of course, go through the process I mentioned above, but will stay within the rules, and so you will know pretty well where you are going and how to get there. And when you do reach the end, you will have said something meaningful, if not new. There are times when "Two and two are four" needs to be said; and there are certainly much more complex ideas that aren't new that can stand being repeated.

But to return to the artistic process, the artist has got to play judge as well as artist all during the time of producing the work; and here is where what I mentioned earlier about feeling emotions comes into play. An artist has to train himself to be able to feel things but simultaneously to detach himself from what he is feeling, so that he can let the work produce the feeling in him that it actually produces in "the normal human being" (because he's making a statement to others, remember), and not project onto it the emotions he happens to have.

This takes a lot of practice; but it is why artists like Mozart could produce joyous music when they were in the middle of deep depression. They were detached from their own emotions, however strongly they felt them.

One way you can tell whether you are projecting what you want onto the work rather than getting it from the work is the need to justify to yourself what you are doing. That is, if you start saying to yourself, "That patch of red has to be there to balance the ocher on the other side," or "The character had to do this at this time to show that he wasn't typical," then you can be pretty sure your receptivity is telling you that there's something wrong with what you're trying to convince yourself is right.

When you give reasons to justify something in a work of art, they tend to be based on established rules, or on perceptive logic; and when the work is following its own emotional logic, you just "know" that things fit (they "feel right"), and there is no need to justify them. You might afterwards (or even at the time) be able to give reasons why they are where they are; but the reasons are really irrelevant. Only academics care about such things; people who are looking or listening just get affected by it.

It takes a lot of practice, once again, to get into the state where something's "feeling right" isn't just your creative satisfaction with actually having got something down on paper or canvas. Some people, like Mozart, seem to be born with the ability to do this, just as some people can play the piano by ear; but it's evidently quite hard to acquire, as witness the junk that some people turn out along with their masterpieces. There are a number of plays that Shakespeare wrote that only the most avid historian would want to see, because they just aren't very good. "Even Homer drops off to sleep now and then," says Horace.

One of the most important things the artist as judge has to do is learn to see the work as a whole and to sacrifice parts that, however good they are in themselves, don't contribute to it. In both the esthetic and perceptive realms, this is very hard to do, especially when the parts are very good but won't stand on their own. Horace remarks in the Ars Poetica about the sculptor who could make absolutely perfect fingernails, but never produced a decent statue; and he was the one that coined the phrase about the "purple patch" sewn onto a garment that made the whole thing ridiculous rather than beautiful. There's a lot of good, sane advice in that poem still today.

But let us now assume that you have got your work "in shape," as they say, and found that for you it says what you now want it to say, whatever you might have wanted it to say in the beginning. Let us take a look at it.

You have expressed yourself. This is truer in art than it is in the perceptive realm, because the work not only talks about the fact "out there," it talks about the human being as an emotional being; and so naturally it is going to talk about what you have seen emotionally, and how you saw it.

But it isn't just expression--the productive analogate to Aristotle's catharsis, a kind of emotional eructation. You aren't just expressing yourself; you are first and foremost expressing a fact about the world. You as emotional--sorry, but let's face it--are of supreme uninterest to anyone else, as you can tell from your own reaction to those who tell you all about how they feel about everything, and have, as they say, "I trouble." No, if your emotional reaction allowed you to discover something true about the world, and insofar as your emotional reaction is transferable to the person who sees the work, you haven't just expressed yourself or expressed a fact, you have communicated this fact to someone--and at the same time established a solidarity between you and this someone as emoters, because he understands what you were trying to say.

This, I submit, is why artists suffer to do their thing; they have something to say, and how they are straitened until it be accomplished. Artists won't grovel and beg for acceptance; they are convinced that what they have to say is true; but they deeply and sincerely want other people to understand it. Not understand them, exactly, but understand what it is that they are trying to say.

But of course, the artist, especially the genius-type of artist who has something new and different to say, always wonders whether he has said something valid, or whether, like Shakespeare writing the bad plays, he is just putting down junk that only seems to mean something because of his desire to say something. There is a kind of depression (which can often be very severe) following completion of a work, where you say, "Well, there it is. Now what do I do with it?" You've said what you have to say; now will anyone listen? Why should they? How much do you listen to someone who comes along and plucks at your sleeve?

This again is not confined to artists. As I write this book, I have the same expectation that nobody is going to be interested in reading it; and I am sure when I finish it that the letdown is going to be overwhelming. Then why am I writing it? Because I know--at least I believe--that I have something to say, something that deserves, even desperately needs, to be said and heard, and if I don't say it, then, because of the weird nature of my mind, it simply will not be said. I don't know that it is true, though I am convinced that there's a good deal of truth in it; but it goes directly against what is the accepted wisdom of my age--and in fact a lot of it goes against what is the accepted wisdom of every age before me--and who am I to say that I am right and all these brilliant people are wrong? Nobody. Or somebody who looks at things from a really strange point of view.

And of course, I'm lucky, since if I am right, then you are in fact reading this now, you and so many others like you (all of whom I am watching as you do so), and the ideas--the valid ones, at least--are spreading over the world, just as Mozart's music and Van Gogh's paintings and Rodin's sculptures did, along with all the other works of art and science that nobody paid attention to until the perpetrators died and started shaking up the cosmos. And if I'm wrong and these are just words, never read by anyone, then they don't deserve to be read; and that would satisfy me too (but of course if I'm wrong there's nothing to be satisfied about).

Don't expect encouragement, if you're a genius. People will be very concerned to help you not get a swelled head; but that's not really a problem for you, if you're serious. You need someone who knows something to tell you that you've got something there; but if you find someone like that, you have a treasure beyond rubies. Generally speaking, your triumphs will be accepted as a matter of course, "because after all, he's brilliant and everyone knows it, especially himself," and your failures will be called to your attention, just to make sure that you don't get too conceited.

And you will fail. Look at Michelangelo. If you are an artist, especially a genius-type artist, you are bound to fail, because what you produce won't be what you wanted to say. You have to say it materially; but what you understand is spiritual. You have to communicate with someone else; but you can't transfer your concept or even your emotion directly; you have to do something to wake the other person's emotional apparatus up to the very subtle combination of emotions that you need for him to see your concept. And you won't be able to do it as well as you want.

Artists, when asked what they were "trying to say" with their works, are apt testily to respond, "It says what it says; look at it." And of course it does. They know what they intended to say; but if you can't see it, there's nothing they can do to tell you how to look at it; they tried to express themselves as well as they could, and your question simply tells them that in your case, they failed. No wonder they aren't happy with your question. Browning, I think it was, is said to have replied to someone who asked what one of his poems meant, "Madam, when I wrote that, only God and I knew what I meant. Now only God knows."

And, of course, the artist will find that people will get different ideas from his statement than what he intended to say. There is nothing unusual in that. People, even intelligent ones, who have read some of this book already, have interpreted it as saying something totally foreign to what I was at enormous pains to say as clearly as I could. The statement is an object in its own right and it has its own meaning, which may or may not be what you intended it to have.

But this does not mean that we have to grovel at the feet of Derrida and deconstruct everything. Even if we can't express exactly what we intended to say in such a way that it has that and only that meaning once expressed, still, any complicated kind of expression will have only a few meanings that make sense, and all of them will be clustered around the basic meaning. So communication in both the perceptive and the esthetic realm is fuzzy but not hopeless. Otherwise, how could Derrida get across his idea that texts should be deconstructed?

My wife at the moment is struggling with Plato's view on women, and the various authors who have commented on what he said. She keeps coming over to me and saying, "Another one who hasn't read the text!" and quoting some opinion that fits with a few texts but conveniently ignores other places where Plato says the opposite, or which accuse Plato of contradicting himself and not knowing what he is talking about because they say his view on women is the opposite of what he says it is. And so on. Most of these views are only sound if Plato was an idiot; but when you judge Plato, as when in the esthetic realm you judge Beethoven, you aren't judging Plato, Plato is judging you.

And as time goes on and the mind-set of people changes, then the innovative artist who has something to say will be understood, while the mere iconoclast will fall by the wayside and be trampled on by history.

But since art is communication, it expresses something from one person to another person. And as the expression of a person, an individual, it is bound to have a style. Each of us has his own way of organizing and arranging data, and the personal quirks will show up in the finished product.

Some artists try to cultivate a style; and of course, to a certain extent, some attention to how you are saying things is laudable. But an artist should not be so enamored of having people know who is speaking that it becomes even of equal importance with what is being said. The only really important thing is what is being said; and if you work to say this as clearly and precisely and forcefully and appropriately as possible--if you subordinate yourself to your statement as well as your medium, as I spoke of earlier--you will find that you have acquired a style without bothering to acquire one. And that is the only genuine style.

"Mannered art" is, of course, something in which how the statement is being made seems of more importance than what is being said. We have the same thing in the perceptive realm in jargon-filled "scientific" papers, or in political speeches. One of the reasons politicians sound so insincere is that it is so obvious that they are concerned with how what they are saying sounds that you get convinced that they don't care about the contents of what they are saying. It is all "image," not content. President Reagan was accused by the news media (who hated him) of being "the great communicator" and putting style above substance; but I heard him, and he came across to me and to most of the American people as actually believing what he said, and be damned to frills and furbelows. He projected sincerity.

People say, "Well yes, but he's an actor, after all." Let me clue you in on something about actors. I think an actor finds it harder to cover up a lie than an ordinary person. Actors can't just "put on" a part like a suit of clothes, and produce little technical tricks that convey the right feeling. This is an actor writing this, by the way. No, an actor has to "put on" the feeling, and then be sincere about it; he is a person who has the capacity to "get inside" some other person's skin, and live that other person's life, understand the logic of that person's behavior; and once he does this, he just expresses what is true.(1)

An actor, like every artist, is not a falsifier; he is a truth-teller. I once got into a bit of a tiff with a director, because I was playing a drunk who had been traveling by bus for months, and he wanted me to have a silver flask instead of a whisky bottle. I told him, "But if I had this, it would have been stolen from me years ago; and I couldn't be bothered decanting my drink into this thing. It just doesn't make sense." He wanted it, nonetheless, and I finally told him, "All right, but if I do this scene the way you want, it'll be George Blair obeying orders, not 'Gerald Lyman' taking a drink." We finally worked out a compromise.

This is the famous "artistic temperament." Artists are emotional, of course, and performing artists are trained to express their emotions. But it's not just a question of tantrums; it's a question of honesty. An artist understands something, and understands something true. If that is contradicted, then obviously the whole enterprise is a waste of time, unless he can be shown that the other point of view is esthetically just as valid. Why suffer to say something false?

But can art be false? Yes indeed. This is what artists are talking about when they refer to "prostitution of one's art": telling the people what they want to hear, not telling them what is true. Those statues of saints in so many Catholic churches are either lies or mistakes; anyone who loves God can't be that indifferent to his world and the people on it; and anyway, love of God is anything but wallowing in soupy emotionalism; it is hard suffering, trying, as I said in the section on mysticism, to relate to God without any emotion connected with it, and to act in the middle of a feeling of total abandonment.

True, sculptors commissioned to make statues of saints are not necessarily apt to understand this; and so they take the standard view of sanctity, which is repulsive, and turn the saints into pagan gods and goddesses for the unsuspecting to worship, instead of showing them as heroes for people to imitate.

Liturgical music can also be a lie. Here, at the point where Catholics believe that Jesus the Lord and Master becomes physically present and his crucifixion is brought to the front of the church, the most solemn and horrible but sublime moment in the whole creation of the universe is introduced with guitars, tambourines, and musical doggerel. And during communion, we used to sing Like a Bridge over Troubled Waters I Will Lay me Down, filling the fact that we are all cells in one body with sexual overtones.

Let us not call them lies; they are objective esthetic misstatements: things that contradict the facts about what they are trying to talk about. I suspect that Mr. Mapplethorpe's photographs that I spoke of earlier are of this type, because in fact what the people are shown doing to each other is violating each other, whatever they might think they are doing; and this sort of thing could be depicted in such a way that it is shown esthetically to be a violation. If it isn't, then at best it is as much of a false statement as someone's saying with total sincerity and conviction that the earth is flat. No matter how eloquently he pleads his case, the earth is still round.

So art is bad when it contradicts what the facts are: that is, what the emotional relationship is in "the normal human being." It is also bad when it contradicts itself, as when there is a part of it that says esthetically one thing and another part which esthetically says the opposite. I mentioned the poem Lift Her Up Tenderly in this connection earlier.

Art can also be bad when it mistakes the emotion itself, or the evocation of emotion, for making a statement. This is "sentimental" art, although the emotion can be of any kind. Joyce Kilmer's Trees has frequently been used as an example of bad art of this type. "Poems are made by fools like me,/ But only God can make a tree." Really? Are poems esthetic trifles, and are trees that much more beautiful? But the "humility" here (not to mention the hint of arrogance in calling himself a poet), the devotion to God hinted at earlier by the tree's "lift[ing] her leafy arms to pray" while she has her hungry mouth at the earth's "sweet flowing breast," and the "nest of robins in her hair" (picture that, if you can) are all images dragged in for their emotional impact, not because they go together with any kind of esthetic logic. The unsuspecting come away from the poem feeling good about themselves and the world and Kilmer, and are apt to bristle when you tell them that it's just no good. It's like one of James Michael Curley's speeches: gorgeous to listen to, but saying nothing at all.

In any case where the emotion is too strong to support the concept (if any), the art is sentimental. Art can be intense, even overwhelmingly so, as I said; but it's sentimental if the idea expressed is trivial and the emotions are enormous. As Horace said again, "Mountains go into labor, and what is born is a ridiculous mouse." And the reason sentimental art is bad art is that art is essentially an intellectual experience that uses the emotions, not an emotional experience.

As far as what the artist is saying is concerned, it should be pointed out that he is not necessarily just talking about something the work is referring to (as I said, some works are just internally complex and don't have any external referent at all); some art talks about the artistic process itself. I think that Paul Klee's works, for instance (the things that superficially look like children's drawings), are talking about the different way artists see things from the way most people do. They are actually very sophisticated, and quite complex; it is only at a very surface level that they are childish. Jackson Pollock, with his "drip" paintings, was conveying something of the emotionality of the artist, because the work gives the impression that paint was just flung on the canvas (as in a sense it was); but he actually took considerable pains over where he put things and over what part of the original canvas lying on the floor he cut out to be hung. So there was a good deal of understanding underneath the apparent abandon of all restraint. Piet Mondrian, with his calculated squares and rectangles, talks about the opposite side of the artistic process: the calculation. But his works are not simply mechanical; they have esthetic logic to them, not just pattern.

One final remark. There is a difference between art and rhetoric, and it is analogous to the difference in the perceptive realm between science and engineering. Rhetoric is esthetic engineering, or applied esthetics.

Rhetoric is the use of esthetically understood facts to lead people to action.

This is true rhetoric. Of course, the idea is that emotions of themselves incline people toward acting, and give a person deliberating a reason for choosing an act; and if the person understands a fact through the emotions that makes an act desirable, he is much more likely to perform the act than if he understands it with no emotional backing to it. Philosophy is a very bad motivator, because it does not engage the emotions, however reasonable it makes actions appear (as, for example, in ethics).

Of course, there are abuses of rhetoric just as there are abuses of art, of science, and of technology. The main abuse of rhetoric is demagoguery, in which the speaker either tells esthetic lies or doesn't bother to do anything except inflame emotions to arouse people to action on his behalf. Mobs, with people's shared emotions reinforcing each other, and with a reduced sense of personal responsibility because of the social pressure of the others, are particularly susceptible to this sort of manipulation.

But of course, this same sort of chicanery goes on in advertising, which is modern-day rhetoric. Pictures and music are used to enhance the emotional effect of the words, which are basically esthetic statements, not perceptive ones. The object is to make the person think esthetically that he is deprived and somehow dehumanized if he doesn't have the product in question.

This is not to say that advertising as such is fraudulent. Information, after all, can be esthetic as well as perceptive; and if, say, certain clothes make you look attractive, there is no falsehood in picturing them with the appropriate emotional overtones to the picture. Anti-drug or anti-smoking advertisements are not lying if they picture the addict or smoker in disgraceful or unpleasant circumstances; but here again, picturing someone going crazy after smoking one joint is a lie, and as the film Reefer Madness shows, when people catch on to this, it is funny, and in fact has the exact opposite effect from what was intended.

The point is that rhetoric definitely has its uses, just as engineering and technology do. But just as technology is not science (which is interested in facts, and not what you can do with them), so rhetoric is not art.

That is why didactic poetry or "art" that pretends to be art and is really rhetoric fails. It may not fail as rhetoric, as witness Uncle Tom's Cabin, whose author Lincoln is said to have greeted with "So you're the little lady who started this big war." But it fails as art, because art as such simply provides information, and is not intended to lead toward action. And insofar as the person who is approaching a work of art is interested in learning something and finds that he is being exhorted to do something, he tends to resent this, and his resentment interferes with the esthetic effect. Plays, for instance, which demand audience participation are a violation, I think, of the artist-viewer relationship. The artist has something to say; the viewer wants to hear it, not contribute to it, because he recognizes himself as ignorant in these matters, or why would he be here?

But let us leave it to art critics and students of art to discuss the subject further. I think I have said enough to show basically what is going on in art, and to make out a fairly good case that it is both emotional and intellectual and in fact does tell us something about the real world as well as about ourselves as human beings.


Notes

1. Of course, there are tricks, known by the actor, and they work; and con-men can exploit these tricks to their advantage. The supreme example of this, perhaps, was President Clinton, who convinced millions of people by his "sincerity."



Section 6

Humor


Chapter 1

Is humor just nastiness with a smile?

I don't intend to say a great deal about humor, but it should be treated, if only to see how it fits into a scheme of knowledge. I mentioned my basic idea in passing in the treatment of apparently contradictory situations in Chapter 1 of Section 2 of the first part 1.2.1, where I said that humor was the acceptance of apparently contradictory situations, and distinguished "funny-ha-ha" from "funny-peculiar."

For centuries people have recognized that what is funny can sometimes be really horrible; and yet up until quite recently people have regarded the ability to laugh and to see the humor in things as a sign of a "healthy mind." It has also been held to be a sign of a rather high intelligence. Perhaps this could sum up the feeling about humor that everyone has: "If you laugh when I don't laugh, you're silly; if you don't laugh when I laugh, you have no sense of humor; if you laugh when I laugh, you're brainy and an all-round nice guy."

Recently, however, with Freud's view of humor and its offshoots, there has been a rather sinister cast put on laughing at things. It is regarded in these views as a kind of reinforcing of our superiority over what we are laughing at, and is supposed to be pleasing because we bolster our self-esteem by putting ourselves on a higher plane than what is ridiculous.

To me, this makes no sense. In the first place, why would anyone then tell jokes about himself--or for that matter, why would any comedian get up in front of an audience and sweat and toil to make the people out there despise him? I think of men with egos like Lou Costello's or Jackie Gleason's--the latter obviously (from some of his later films) capable of superb performances in serious roles--making clowns of themselves to get a laugh. It doesn't wash.

And then, what about puns? They're clearly funny, even though the accepted response is to groan at them; and what is it that I feel superior to when I laugh at encountering an unexpected word? Or why do we want someone to share our laughter if it is a sign of how much better we are than the rest of the world? No, that view of humor would itself be ludicrous if it weren't taken so seriously.



Chapter 2

What is humor?

Then what is it that makes us see things as funny? What I hinted at above can be expressed by the following definition:

Humor is the understanding that some fact about the world doesn't make sense, together with a refusal either to treat it as a problem or to evaluate it.

That is, if you confront something that contradicts your expectations, you have three possible attitudes to it: you can consider it an effect, and try to find a solution for the problem (in which case you are basically in the scientific mode of thinking), or you can consider it as bad and either complain about it or set about correcting it (in which case you are in the evaluative mode of thinking)--or, finally, you can simply accept it as a fact, in which case you laugh at it.

And this, of course, is why comedians want to be laughed at. They see that the way the world really is is contrary to the way we expect the world to be; but they also see that there is a great deal of this that doesn't threaten us, and they want to show people the way the world is in such a way that it is still somewhere you could want to live. In a sense, it is the comedians who perform the "catharsis" of bad situations, rather than the tragedians. Tragedy shows that evil can make sense in one way or another; comedy shows that it doesn't have to make sense.

Why is this a "healthy" attitude of mind? As far as I can see, only my philosophical position on values can make sense of it. Whenever we see something as bad, we are, as I said earlier and am going to spell out in more detail in the next section, comparing it to our preconceived ideal of the way the world "ought" to be. Humor, however, tells us to accept the world as it is, rather than either trying to make it over into what we would like it to be or complaining that we weren't allowed to be its creator. Humor even acts as a check on the scientific attitude, which greets everything contrary to expectations as a puzzle to be solved. It shows that, however puzzling the fact may be, it is still a fact, and a fact is a fact; there is no ontological demand that we find a way to satisfy our reason before we will accept it.

This is not to say that humor is the "only real" attitude to take to the world, that what we should do is practice a passive kind of "conformity to the will of God," and simply accept everything and laugh at it without ever trying to improve situations.

In fact, it can be immoral to take this attitude in certain situations. If someone is injured and you can do something about it and all you do is sit back and notice how ridiculous he looks running around carrying his severed arm, this is hardly the sign of a "healthy mind." In refusing to prevent the harm (supposing that you could have done so), you have cooperated with it, and morally speaking this is the same as inflicting it yourself. Even if the harm was not preventable by you (it happened in an accident with some machine he was using), then by sitting back and enjoying his situation (certain aspects of which are incongruous), you are refusing help and at the very least the sympathy which he deserves in his dehumanized condition; you have by laughing at him proclaimed yourself superior to the rest of mankind, which is a lie--not to mention the fact that you are killing your ability to sympathize with others, which is the most noble aspect of yourself.



Chapter 3

Types of humor; satire

And of course, this attitude is what makes "sick" humor sick, and inhuman. Interestingly, it is what some moderns think is the basis of all humor. Humor supposes a detached attitude toward the situation; but there are situations where deliberate detachment is immoral by omission; and so not all funny situations are such that the humor in them can be morally recognized.

--Except, of course, by the person to whom the harm happens himself. Last week I went to my locker after my workout and found no lock on the door. Thinking, "I'm really getting to be the absent-minded professor; I didn't even lock my locker," I opened it and--just in case--felt in my pants pocket for the wallet that was no longer there; and then noticed that the lock was not where it would have been if I hadn't locked the locker. If I had had a sense of humor, I could have thought of what the expression on my face was like; as it was, I cursed myself for being an idiot and bringing my wallet to the gym, knowing that there had been break-ins.

The point I am making is that there would have been nothing wrong in my laughing at how stupid I was; but the other people in the gym couldn't have done so morally. The line between where humor "at the expense of someone else" is permitted and where it is immoral falls more or less at the place where, in the next section, I will draw the distinction between values and necessities: where damage, either physical or mental, is done to the person (so that he cannot do what he could normally do, especially what he could be expected, because of his genetic potential, to do). In that case, you would be enjoying the dehumanization of another human being, which is inconsistent with your being human yourself. This is particularly true, of course, if, as I said above, you are by your inaction refusing to prevent or cure this dehumanization; your laughing at the incongruous aspect of it adds insult to the injury.

Most practical jokes, therefore, should not be regarded as funny, because most are at least humiliating to the person on whom they are played, and no one ever has a right to humiliate anyone but himself. Being a "good sport" simply means allowing one's rights to be violated and not complaining about it because everyone around you is laughing. This goes for laughing at the "cute" antics of children and filling them with embarrassment. Incongruous cruelty is still cruelty, and no human being should enjoy it.(1)

As far as practical jokes are concerned, however, there is one type that is legitimate: the kind that Jesus was fond of playing, where what is unexpected is a benefit. For instance, he came up behind Mary Magdalene weeping at the tomb and asked (as if he didn't know her) what she was crying about; at which she replied, thinking he was the caretaker, "Oh, sir, if you've taken him from here, tell me where you've put him and let me have him!" at which he said "Mary," undoubtedly amid gales of laughter. Shortly afterward, he walked seven miles with two of his students, talking earnestly about himself without letting them know it was he until the end of the journey. And so on.

In certain contexts, laughing at harm to others can be legitimate. When such things are presented on the stage, then as I said in the preceding chapter, we know that no real injury is being done, and we can focus simply on the irrationality of what is going on. A woman in a velvet gown comes into a room, puts her hand on the grand piano, says in a soulful voice, "John," and gets a pie in the face--and stands there, with a surprised look, letting the whipped cream drip all over her clothes. We laugh (at least I did), because we know we don't have to consider the insult, not to mention the damage to what she is wearing. If she had said, "Now what did you do that for? You've ruined a thousand-dollar dress!" and began to cry, it wouldn't have been funny, because then we would be empathizing with her.

It is the fact of not being able to empathize that makes overdone tragedy funny. This is what is sometimes called "camp": a film that was intended seriously, but which overstates its case. King Kong is one of these: the story of the two-story-high gorilla that fell in love with a woman he could hold in the palm of his hand and got killed climbing the Empire State Building to find her. I kept wondering how he expected to express his affection to this ant-sized female; and I can't imagine anyone over five years old being frightened, or taking the last line, "it was beauty killed the beast," with anything but guffaws--but apparently the original audience did.

Even more, we feel no qualms at all about laughing at total mayhem committed on cartoon characters. Since they can fall from twenty stories and have a safe fall on top of them and then get up and walk around as flat versions of themselves and in the next frame be back to normal, there is no hint of damage's actually being done, and we don't have to concern ourselves with anything but the poetic justice of how the ingenious attempt to murder the other character backfires. Violence in these films is not violence, because it isn't perceived as such, but just as incongruity. I once saw a film called Teenage Mutant Ninja Turtles which wasn't a cartoon but had actors dressed up as the turtles going around with staffs and chucka sticks and perpetrating law and order on gangs of outlaws in what would be the most gruesome fashion if it were not so unreal. It was decried by some as too violent; but it is hard to take seriously as violence something in which the turtles call to each other for "high five" handclasps with "Gimme three!" since they only have three fingers. It had the same flavor as a cartoon, and was a lot of fun, with, I must say, beautifully choreographed fights.

Puns are funny because the substitution of the inappropriate word changes the meaning of the sentence it is used in; and they are funniest when the meaning is also unexpectedly true. A person, for instance, getting up after a restless night is described as "bed-raggled." They can also be funny because they bring in allusions: "As one monkey said to another, 'Am I my keeper's brother?'" brings in the episode of Cain from the Bible together with the theory of evolution, and--depending on the tone of voice you say it in, or the context--it can also indicate a reversal of the anti-evolutionists' attitude toward being related to monkeys.

Sometimes the humor is just in the sound, as in certain spoonerisms. "Mardon me, Padam, but you're occupewing my pie," allegedly said the Reverend Spooner himself--and the humor is enhanced by the fact that it almost seems to mean something in its transmogrified form, as does the sentence which is said to have followed it, "Can I sew you to another sheet?"

Sexual jokes are funny because they treat lightly something we instinctively know is very serious; and the same sort of thing goes for religious or political humor, when it is just humor and not satire. Sexual jokes in mixed company are apt not to be funny precisely because the other sex is there, and the overtones of exploitation or cruelty become apparent in addition to the incongruity. But I am inclined to think that certain sexual jokes (that is, those that aren't "sick" and aren't a cover for enjoying cruelty) are good, because they put into perspective something that is apt, because of the strength of the urge, to put itself forward as the only real reason for living. So the fact that men tell sexual jokes among themselves is really no indication that all men are at heart rapists; it is a way of defusing the bomb that the sexual urge in the male can be.

Religious jokes do, as I said, the same thing, as do ethnic jokes. But in these two categories they are only funny when they are told between believers or between members of the same ethnic group. Jewish jokes told by Jews to Jews are funny; Jewish jokes told by Gentiles aren't, because they involve put-downs of Jews because they are Jews. Similarly, facetious remarks about the Catholic liturgy among Catholics are funny, because even that most solemn of all acts has its incongruous modes, and these are facts. But when non-Catholics point them out, then Catholics--often rightly--suspect that they don't see the basic seriousness, truth, and value of the liturgy as a whole, and they resent someone saying something that appears to make the whole enterprise stupid.

What is called "wit" is the ability to see the incongruous in something and point it out so that the person is surprised into a new realization. It has the danger, since what is incongruous is also what is bad, of being clever nastiness. I remember one time a student of mine was remarking between classes about a jump suit she was wearing that everyone thought was pajamas. I mentioned during the class that followed that my daughter, who at that time was supported by me, could spend all the money she earned in her job at Saks Fifth Avenue in buying clothes. "Just like mine!" the woman remarked, and I said, with a smile on my face, "Oh, no! She has better taste than that!" She said nothing until everyone was filing out of the room, and then in tears accused me of making slighting remarks--and by that time, I had forgotten what I said, and had to worm it out of her, after which I apologized both to her and publicly to the class. I have no concept of what is "in good taste" in women's clothes, and was simply trying to be clever. I am only in training to be a wit, and have not got more than halfway there.

The reason, of course, why jokes are not funny the second time you hear them is that you already know the fact that they are pointing out; you have no expectations that the reality of the world dashes. But the reason why it is enjoyable to tell the same joke over and over again is that you know something about the world that you want to share with others.

People can even share a joke and keep telling it to each other, relishing the fact that the two of them know about how crazy the world is. In this, humor shared creates a sense of solidarity, like that of the appreciation of the same type of art; we know that we like the kind of person who can see humor in the type of situation we see humor in, because he is like us; just as the type of person who appreciates the music we like is like us too.

But just as with esthetics, what is underneath humor is that we recognize that what it is saying is true. The world is in fact insane in the way the joke or the humorist shows it to be. If we don't believe this, we don't find the jokes funny, but just silly. This is one reason why ethnic jokes told by someone outside the group are not funny: they are seen as reflecting on the group as a whole, and the "truth" conveyed is not that the group is no worse than other groups, but that it belongs on a lower level than "real" human beings, which is of course false. But when someone within the group tells them, what is asserted is simply the inconsistency of the behavior.

Finally, a remark about satire, which is to humor as rhetoric is to esthetics. Satire starts out with humor, making the person laugh at some situation that is contrary to expectations; but then it shifts the ground and makes the reasonable situation appear to be the one that ought to exist, in which case the unreasonable facts then seem evil, and something that must be corrected or done away with.

Jonathan Swift is the quintessential satirist. In his "Modest Proposal," he starts out by suggesting, as a solution to the hunger in Ireland, that the Irish cook their infants for dinner, thus feeding the family and solving the population problem. He treats this in a matter-of-fact way, and his treatment of it is funny after the manner of religious jokes (that is, it is an outrage taken as a kind of matter of course, and so you don't think he is serious)--until the end of the essay, in which he says how much more reasonable his proposal is than the unthinkable abolition of absentee landlordism. It is a superb piece of rhetoric, because the reader has been going along with the gruesome proposal to see how horrible it can get, realizing all the while how much sense it makes in a perverted sort of way, and then is confronted with the real solution to the problem of starvation in Ireland as the only other alternative--and one which costs no suffering at all. And then there is Gulliver's Travels, in which all the rather vulgar humor culminates in the rather shocking realization that horses are far superior in moral qualities to men.

Dickens's novels, among other things, are satires. His comic characters, like Mr. Pecksniff, Mrs. Gamp, even Fagin, are not simply funny; he quite clearly wants the world rid of such people--and he did as much as Marx, I think, in alleviating the problems of industrial England, because he enlisted the emotions in his satire, and Marx enlisted the British Museum.

There is nothing wrong with satire, any more than there is anything wrong with rhetoric. But just as rhetoric is not art, because its purpose is action, not information, so satire is not humor, because its purpose is also action, or at least condemnation of evil practices. Humor as such does not take a stand on what it laughs at; satire does; and that is why humor is good-natured and can be enjoyed even by the people who are the subject of the madness pointed out, while satire is resented, because it supposes that the world is to conform to the satirist's view of what it ought to be.

For instance, I don't happen to share the political views of Garry Trudeau, the artist of the comic strip Doonesbury, which is left-wing satire of everything in government. I happen to think that much of what goes on in government is the very opposite of what the people in government say is going on; but to appreciate Doonesbury, you have to agree with his idea of what the solutions are; otherwise, he is simply sneering. On the other hand, Berkeley Breathed, in the strip Bloom County, often poked fun at the same things; but he poked fun at everything, and though it seemed to me his orientation was probably close to Trudeau's, his humor didn't rankle, because it was humor, and he didn't give the impression of being a crusader.

I personally think that artists and humorists have to be very solid in their grasp of reality before they pass over into being rhetoricians and satirists. From what I have seen of both, their views on things are very often simplistic and emotional, with very little in the way of hard facts to back them up; and their solutions are often such that they would only make the problem they are trying to solve worse. Solving problems by taking an esthetic approach is probably impossible, because the world doesn't want to behave the way our emotions would like it to behave, and we have to get cold and hard-headed to see how we can take the small steps toward betterment that the world is ready for, not impose the ideal on a recalcitrant earth. Very few satirists are of the caliber of Swift and Dickens; and when the humorists get serious with their humor, they very often just turn out to be nasty people with smiles on their faces.

And with that, I end this unfunny discussion of what is funny.


Notes

1. This goes for tickling also. Even though laughter is the response to being tickled, it is not the laughter involved in enjoyment, but a nervous reaction to the invasion of the body by another. I temper this comment by my experience with my six-year-old grandson, who seems to enjoy and asks to be tickled. I don't understand it, but evidently there is at least one thing in heaven or on earth that is not dreamed of in my philosophy.



Section 7

Values


Chapter 1

Values vs. morals

The study of values is usually called "axiology," and one of its major parts is the study of what is right and wrong. I think, as you will gather if you have read this far, that this is a mistake. I intend to treat right and wrong in the sixth part of this marathon tome, after going through the various modes of interaction between people. To explain why I am not treating it here, let me refer you back to the section on rightness and wrongness in Chapter 10 of Section 5 of the first part 1.5.10, where I said that they are the objective consistency or inconsistency of an act with the agent performing it, and have nothing in themselves to do either with the evaluation of the act as good or bad, or even with the knowledge that it is in fact right or wrong.

Evaluation depends, as I said also in Chapter 10 of that section, on ideals or standards we freely set up. I repeated this in Chapter 6 of Section 3 of the third part 3.3.6, under Conclusion 9 and also Conclusion 11, connecting it to choice and goals, where I defined a value to be the aspect of something by which it leads to a freely chosen goal. It is our task here to explore this a little.


Chapter 2

Goals and values

Things are a trifle complicated, therefore. But they are made more complicated by the fact that we use "values" and what is "valuable" in at least three senses; and so it would be well to make them clear, so that we can eliminate ambiguity as much as possible from what we are talking about.

First of all, when people talk about something like "the value of life," or when they say that "life is a value," they precisely do not mean that life is something that is (a) useful, and may be more or less useful than other things, or (b) that it is something admirable, and may be more or less admirable than, say, honesty or courage. What they mean is that it is something that demands that it be respected and not interfered with.

In that sense, a value is an absolute, and is to yield to nothing else whatever. Values-to-be-respected are in fact rights, and they "supersede" other values in the sense that they cannot be made to yield to any value of any other sort. You can't justify killing someone in the name of your own health, your own happiness, or even "the greatest happiness of the greatest number" (which is one of the places where utilitarianism comes a cropper); the only thing that could justify your performing the act which led to someone's death is that you did it to defend yourself or others against the violation of another, equally serious right. We will see how this can be done consistently in the fifth part. But no matter how much better it might be for everyone concerned, you cannot kill someone, or deprive him of any right, for any good purpose.

But if values-to-be-respected, or values in this absolute sense, are in fact always rights, why don't we call them rights rather than values? Life is something to which people have a right, not something which is of "supreme value." In fact, as I will show later, it is a necessity, not a value at all, because you can't morally choose to stop living bodily (and of course, you can't actually stop living, because your life is eternal). Calling it a "value" risks classifying it with those things that can be weighed in the balance with other values, and with those things which are more or less useful to some purpose.

So from now on, I am not going to use "value" in the sense of "value-to-be-respected."

Secondly, we speak of moral values, such as virtues. These are "values-to-be-admired." Thus, honesty is said to be a "value," and so is courage and cleanliness and generosity and so on. These are not exactly absolutes, except in comparison with their opposites. It is obviously immoral to be dishonest, and so dishonesty is to be shunned, however advantageous may be the dishonest act; but you can be more or less honest. For instance, in regard to telling the truth, you must avoid deliberately saying what you know is false, but this does not mean that you can't keep your mouth shut and not actually tell the truth--or that if you tell it, you have to tell "the truth, the whole truth, and nothing but the truth" (unless you have sworn to do so in a law court, of course). Similarly, you can be a little courageous without going so far as to put your life on the line; and this is perfectly legitimate in cases where the latter is not demanded. You can be generous and still keep some of the finer things of life for your own use; you don't have to go so far as to "sell all you have and give the money to the poor."

These, however, are not values, strictly speaking, but moral ideals. They are acts which are objectively consistent with what it means to be a human being (and so are morally right acts); and so the habit of performing them is a moral virtue, and one who has these virtues has trained himself to act consistently with his nature. They are not something useful for acquiring human excellence, but are a spelling out of what that excellence is.

The reason they are ideals is that they carry an "ought" along with them; because you have to have them (to some extent) if you are going to avoid being immoral (choosing to act in a morally wrong way). But they are abstractions, and as such have no limits, and consequently can't in fact be put into practice by anyone. That is, no one can be completely honest, in the sense of never giving anyone the impression that he is anything but what he in fact is. But the virtue of honesty eschews any hint of hypocrisy, deceit, or cheating.

Yet we use them as standards for judging people's conduct. I remember one nun on our Rank and Tenure Committee who wanted to withhold promotion from a faculty member who had knocked himself out teaching, publishing, being on committees and all sorts of things, because she had the impression that he was "really" doing all this because he was ambitious, not because he was a loyal member of the college; and she didn't like the dishonesty she thought was there. Needless to say, I had some pointed remarks to make to her about her view.

Hence, these virtues in their unqualified form are used as standards for the evaluation of human conduct; and there is a certain objectivity to them, in that they are the opposites of what is inconsistent behavior. But first of all, insofar as they are standards, they are not values, but simply ideals; they are not "worth" anything in themselves, but are just what the person who judges thinks "real" human beings ought to be.

Furthermore, to what degree someone "ought" to measure up to these standards is a matter for each person. You might think that a person who doesn't tell you his faults is not being honest, because he's not being completely open, while I might be content as long as he doesn't actually lie, and call that honesty.

In point of fact, we have no right to use these standards to judge anyone else's conduct, as we will see in the fifth part, because we have no way of knowing how much the other person knows about the situation or what is called for by the situation, and to what extent his actions actually followed his choice. Hence, even if a person says what is false, and you happen to be aware that the day before he uttered this false statement he knew what the facts were, you don't know whether at the time he uttered it he remembered, or whether, even if he remembered it, he did not blurt out his falsehood without being able to prevent himself.(1) Hence, even if your standard of honesty involves simply not saying the opposite of what is the case, you still can't use that as a way of saying that someone's conduct is immoral. This is what Jesus was getting at when so often he commanded his followers not to evaluate other people's conduct.

But be that as it may, these "values-to-be-admired" are moral standards rather than "values," because they are certainly not means to human "worth," but rather the manifestation of it; and as to human "worth," this does not mean that the person who is virtuous is "more valuable" as a human being than one who is less virtuous, as if the less virtuous person were somehow expendable or to be looked down upon or slighted because he didn't measure up to the other's conduct. A human being is to be respected because he is a human being, not because he "deserves" respect by his conduct, as if rights were something you earned, and not something you had by nature, however shabbily you treat your nature. That is, suppose there were eleven people in a lifeboat that held ten, and one had to be thrown overboard to avoid having everyone drown (which can be not immoral, but let us not discuss that here). To use virtue as the criterion of who should be kept in the boat--to chuck out the less virtuous person, who had four others back at home depending on him, on the grounds that he wasn't "worth as much as a person"--would be immoral. (Actually, if the other were really virtuous, he would volunteer to jump overboard in order to avoid having someone else make the choice of whom to force over.)

I am not necessarily denying the place virtues and moral standards have in a person's life. What I am saying here is that they shouldn't be confused with values, because such things don't really admit of comparisons among themselves. Honesty is not "more" of a virtue than courage or generosity; you must have enough of all the virtues that you don't deliberately do what is positively immoral, but you need not have any more than this minimum of any virtue; and what human freedom and self-determination precisely mean is that, within the range of human conduct, each of us picks out the life style we want to live, emphasizing some virtues more than others; and no one is to tell us, based on some abstraction, that the life we have picked is either reprehensible or "worse" than some other life style.

That is, "better" and "worse" are not the same as "higher" and "lower," unless one freely chooses to make the highest (least limited) act the "best," and set it up as a goal for one's life (in which case, you should probably be a philosopher, as Aristotle pointed out). What is morally legitimate but lowly (like lifting weights) is not "objectively worse" than what is more spiritual or "higher." And similarly, if a person wants to be honest in the sense of not being a liar without being perfectly open and candid, then he is not "worse" objectively than someone who goes out of his way to make himself absolutely clear.

As I said at the end of the third part, the curse of this world is standards. Have goals, but forget about standards; accept reality for what it is; and if it wants to go beyond itself, help it realize the potential it is trying to realize; but don't look at it in relation to your fantasy about how things "really ought" to be.

Therefore, let us confine values to what "valuable" things have; and, to repeat the definition in Chapter 5 of Section 3 of the third part 3.3.5:

The value of any object or act is that aspect of it by which it can lead to a chosen goal.

In this sense, the economists are right in their notion of the "utility" of values; values are precisely the usefulness of something in bringing you to where you want to be. I will discuss this further in the next section (and the next part) in dealing with economics.

Now if you recall the brief discussion of values under Conclusion 11 of Chapter 6 of Section 3 of the third part 3.3.6, I pointed out there that, though the goals the values lead to are freely chosen and therefore subjective, you can't make something lead to the goal just by wanting it to. Either the object has the ability to get you where you want or it doesn't.

Hence, the value is something objective. In spite of the fact that it is relative to something which is subjectively adopted as a goal, it in fact leads toward it whether we think it does or not. Many people don't see the value in philosophy, for instance, because they don't understand how it can help them fulfill themselves. But in fact, as Socrates pointed out, "The unexamined life is not worth living," and philosophy can help a person assess more clearly what his goals are and what in fact leads to them, so that he won't inadvertently be at cross-purposes with himself. Philosophy has this value, irrespective of whether a person realizes it; and examining your goals and the reality of the world is a value toward being happy, whether you know it or not.

We can, then, restate Conclusion 11 of Chapter 6 of Section 3 of the third part 3.3.6 a little differently as our first conclusion of this chapter.

Conclusion 1: Values are objective, but personal.

The value is an objective property some object has; but whether this property is a value to a given person depends on whether the person has as his goal what it leads to. Thus, a symphony ticket may be a value to me and not a value at all to you, because you don't have listening to classical music as one of your goals; by the same token, a ticket to a football game may be a value to you, but it has no value to me, because I am not interested in watching football. The point is that in either case, the ticket will get you in to the concert or the game, and without it you can't get in; and so it has the property of enabling the particular act in question. Where the "personal" aspect comes into play is in whether that act is one of your goals or not.

As a kind of corollary of this conclusion, we can draw another:

Conclusion 2: A person does not "choose" or "develop" a value system. He chooses a set of goals, and these automatically carry with them the system of values implied in getting there.

The values are implicit in the goals; but in choosing the goals, you do not know what the values are that will get you to them. Hence, you must study the world and find out what objects in fact lead you to where you want to be, and at the same time don't lead you away from some other goal of yours--or lead you also into some inconsistency with yourself. For instance, it might be that you could increase your income and buy that car you wanted if you embezzled some money from your company. That might be the most efficient way of achieving this particular goal, and it might also be that you would be very unlikely to get caught. Hence, embezzlement is a value leading to the goal you want to achieve.

The trouble, of course, is that it is also an act that is inconsistent with you, since it pretends that something which is not yours is yours; and, as we saw in Chapters 3 and 4 of Section 4 of the third part 3.4.3 3.4.4, this means eternal frustration along with the achievement of the goal of getting the car you want. Hence, even if, taking this life alone into account, embezzling is a value, still, taking the whole of life into account, you will be worse off for doing it than you are now; and hence it is going to lead you away from where you want to be, taking everything into account. (Of course, since you're free, you could say, "I would rather be frustrated in any other aspect of my life, even eternally, in order to have that car," in which case the embezzlement would be a value again.)

The point is that those who are immoral are not people who "don't have any values." They certainly do, and are very often much more aware of what their values are than those who are honest; it's just that either they are looking only to this life, or they don't care about what happens to them eternally, and have their eyes focused on very narrow goals instead of their lives as a whole, in which case what are disvalues for the honest people are values for them. Everyone has a set of values, because we can't go through life without making choices, and choices imply goals and the means to get to them--and these are values.

But the whole trend nowadays of decrying the "lack of teaching of values," and proposing to give a "value-centered education" to correct the decline in morals in our society is misguided, especially when the whole project involves "values clarification" and doesn't make any statements about what is right or wrong but about the person's being clear about "the kind of person he wants to be." I have nothing against this; but it's no way to cure moral decline, especially in public schools, where the one thing that can make it to your advantage to be moral--a life after death--can't be mentioned.

A person who comes to college so that he can take business administration and get a better job generally has a very clearly defined set of values; he knows what he wants, and he knows how to get there. He sees pretty clearly the kind of person he wants to be: a modern-day Babbitt; and if others want to make him look "culchured," then their cringing over his crassness is just their tough luck, as long as he can buy and sell them ten times over.

There's nothing wrong with his values, as long as he's moral. So what he needs is not "values clarification"; he needs to be taught a course in ethics, in which it is made very obvious that there is in fact a hell, so that instead of just being uncultured, he doesn't become another Sammy Glick in What Makes Sammy Run? What that well-intentioned book didn't tell you, as it left Sammy alone at the top after stepping on so many faces to get there, was that there are lots and lots of the people Sammy stepped on who are lonely too, and far from the top--and there are lots and lots of Sammys who have lots and lots of friends, because it's lots and lots easier to have friends when you've got money. If there's no hell, then the people who decry the moral decline in our society are either fools or jealous.

Value-centered education as practiced today is another one of those pious lies, like the one told about George Washington's chopping down the cherry tree and then answering his father, "I cannot tell a lie; I did it with my little hatchet." That never happened; it was made up to teach kids not to lie. The reverend who perpetrated this fraud on children had values, because he knew that the best way to make them behave was to show someone they admired doing the things that were desirable; but I find it difficult to enter into his moral frame of mind if, to achieve such a noble goal, he would do the very thing he was teaching children to avoid.

All this shouldn't be taken to imply that I think that what are called "positive role models" shouldn't be held up before children and others, so that they can have an idea of what really human goals are, see that they are achievable and that those who achieve them are happy, and that they themselves can be happy pursuing this route rather than imitating the pimp in the pink BMW. By all means, give them examples of virtuous people to look to and imitate, and show virtue for what it is, not hypocritical sanctimony. But this is different from teaching morals, and indicating why you had better not be immoral if you know what the whole of life is, and based on that, what side your bread is buttered on. And certainly when people get to college, role models take a back seat to reason; and reversing the value system that is very rational in this life takes more than questioning "What kind of person do you want to be?"

Put it this way: value clarification isn't a value if what you want to improve is a person's morals.

With that said, let us look a bit at goals, since we obviously have rather large numbers of them, generally speaking. If they are subjectively chosen, how do we go about choosing them, and more significantly, how do we rank them, so that we can tell which ones to spend more effort pursuing?

I mentioned under Conclusion 9 of Chapter 6 of Section 3 of the third part 3.3.6 that importance was the name given to the relative position of goals with respect to each other; and that importance itself, like the goals, was also subjective. Let me spell this out a bit.

One goal is more important than another if the other will be given up or postponed in order to achieve it.

Thus, we have our ranking by being faced with alternatives, in which one goal has to be given up in order to achieve another. This can, of course, happen either in fact or in imagination. For instance, if you have only twenty-five dollars of "entertainment money," and you are faced with buying a ticket for the symphony or eating a restaurant dinner, then you are forced to choose which is more important for you, because you can't have both.

There is nothing objective which can help you choose, supposing both to cost the whole twenty-five dollars. Then how do you do it? Arbitrarily. You may give reasons for your choice, as, for example, that hearing the symphony will "nourish your spirit," while eating the dinner panders to the "flesh"; but another person could say that letting his ear drums be rattled by air vibrations just to see connections among the emotions twanged by them is pretty stupid in comparison to feeding his body and at the same time noticing esthetic connections between the emotions connected with what affects his taste buds. Yes, you skeptics, there is an art to dining; and it is very like music.

Ultimately, what is more important is what "fits" better your ideal of the "real true you" which you have been gradually constructing over the years; and this is self-created and is not imposed on any of us by the facts.

Conclusion 3: Importance is subjective, not objective. Nothing is objectively important.

"Now wait a minute!" you say. "You can't mean that staying alive is not objectively more important than hearing a concert!" Oh, yes I can. You're confusing what is essential with what is important, and assuming that essential acts are the "most important" of all, as if staying alive was a goal we have, and the primary and overriding goal of our lives. But, as we will see shortly, things like staying alive and not being maimed and being healthy and being able to breathe breathable air are in an entirely different category from goals; we don't choose these things and strive after them, we presuppose them and work from them. These are not freely chosen ideas of what I want my individual life to be; they are the minimum for any human being to be able to live at all; and the minimum is clearly not a goal.

As I say, I will discuss this later; but for now, take my word for it at least tentatively, and consider that goals deal with your personal, freely chosen life style, and in that case, since you have chosen it, importance (i.e., what comes closer to being the core of that lifestyle for you--what you consider is "most yourself") is up to you, and there are no facts to force you to consider some aspect of yourself as "more to be developed" than some other facet--always supposing that you don't develop one facet in such a way that you contradict yourself in some other respect (which would be immoral).

This is a very radical statement, I realize, especially for someone who holds that morals are objective. But it is true nonetheless. For millennia people have been trying without success to discover what "the good really is," and what is "really important" in life, only to have other people flatly disagree with them--people who, very often, have tried out the life style in question and found it wanting. I happen to have been, as I mentioned, a Jesuit--a monk, I suppose you would call it--and found the life very beautiful, even though, because of my peculiar personality, I was not suited to it, and it was thought better that I should leave. I was taught that it was the "life of perfection," and that those who were called to it were the luckiest people in the world; and in many ways, I would go along with this. But I know many other people who have been there and left who think that it is anything but a desirable life, and look back on their years in the seminary with contempt as something wasted. If importance is objective, then they are idiots. But many of them certainly aren't idiots in other respects; and many have not abandoned their Christian beliefs either.

There is also the fact that people can give enormous importance to what just about everyone else calls insignificant and trivial. They tell the stories of people imprisoned in solitary confinement, who spend their days like Doctor Manette making shoes until being deprived of the leather and tools makes their whole life fall apart; or who pass their time walking back and forth in their cell, counting steps and pretending they are walking from Boston to Los Angeles, and imagining where they have got to--and who resent being taken out for questioning, because it puts them a whole day behind in the journey that has taken over their life.

And then there are the stamp-collectors who are all but willing to kill to find the one stamp that will fill up the gap in the collection, or the bird-watchers who endure cold and colds to catch a glimpse of the rose-breasted grosbeak, or the fishers whose idea of perfection is to spend a whole day sitting in absolute silence with a rod sticking from their hands, waiting for the pike to think that the bait is actually food. Or the football players, who think that nothing can compare with bodies crunching up against each other; and if a few of your bones are crunched in the process, so what? Or even the politicians, who think that the world turns on the windy debates they have with other politicians as they fiddle while Rome burns.

The importance any person gives to any activity is simply silly to a person who has a different set of priorities; and what this should have indicated to thinkers is that priorities are subjective, not objective, not that everybody but philosophers is a cretin. If anybody is almost universally laughed at for having screwed-up priorities, it is the ivory-tower philosophers, who can get excited over whether existence is or is not really distinct from essence, or whether (as one philosophy professor I heard recently snidely remarked) "entity" is itself an entity or not.

It doesn't follow that what is higher or more spiritual "ought" to be more important than what is a more limited type of activity; and this cannot be stressed too much, since we have had thousands of years of people's believing just the opposite. Studying philosophy is more important than running a bank successfully only for the person whose goal is achieving the greatest development of his own personal intellectual capacity, rather than for one whose goal is to see to it that people have a safe and profitable place to keep their money, and can reasonably borrow what they need. I hasten to add that "being useful to others" is also not objectively more important than seeing to your own personal development--because in the final analysis, making your own actions over into a value for others means that you are subordinating your fulfillment to their subjectively created idea of their own fulfillment; and what is "objectively more important" about your giving up your own goals so that others don't have to? I'm not decrying any of these; merely pointing out that there is nothing objective that would single one out over the other as what we ought to take as our goal. We are free, and as long as our goals are not self-contradictory, then we can pick any set we want and rank them any way we want.

Having ranked goals, then, how do we rank values?

One object or act is more valuable than another if it leads to a more important goal.

That should have been pretty obvious. At one time, I thought that there were two criteria for a greater value: that the goal is more important, or that it leads more efficiently to the same goal. But the greater efficiency is only more important if you want to achieve the goal and get on with other things; and so greater efficiency is a value depending on whether achieving the goal sooner is also a goal. It might be that a person would take a longer time getting a degree (by studying part time instead of full time, for instance) because he would rather have the extra time to work, or simply because he likes the college atmosphere and is in no hurry to leave it. Hence, the only thing I can see that makes one object more valuable than another is that the goal it leads to is more important than the goal the other leads to.
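In shorthand (this is only my notation for what was just said, nothing more): if value $v_1$ leads to goal $g_1$ and value $v_2$ leads to goal $g_2$, then

$$v_1 > v_2 \iff \mathrm{importance}(g_1) > \mathrm{importance}(g_2),$$

where the importance-ranking of the goals is, as we have seen, up to the person whose goals they are.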

Notice that, though values themselves are objective, in that they do in fact lead to the goal whether we think they do or not, the relation of values to each other as greater or less is not objective, because that relation depends on the importance of the goals, which is subjective. This is significant enough to make a formal conclusion of it:



Conclusion 4: No object or act is objectively more valuable than any other object or act.

This will figure very heavily in the discussion of economics, which is to follow this section. It is almost universally assumed that there is a "real value" for an object, and if you happen to be able to buy it for a price below this value, then you've either made a shrewd bargain or cheated the seller, the way the colonists, as the story goes, bought Manhattan Island from the Indians for a few colored beads.

There was no cheating going on. Though the beads were abundant in Europe, they were a novelty among the Indians; and just as people have given up fabulous sums for things like the Koh-i-noor Diamond (a lump of carbon), or a painting by Van Gogh (a piece of canvas), why shouldn't the Indians, who had the whole of America to roam around in, part with this island for something comparable--especially when they were mere visitors to the island themselves?

In fact, one of the fallacies in making a science of economics is the assumption that, if large numbers of people happen to agree (at the moment) that X is more valuable than Y, then this momentary consensus confers a certain objectivity on the value of X with respect to Y. Unfortunately, however, people can shift their priorities (what they consider important) with blinding speed, and what was very valuable to large numbers yesterday becomes worthless today. Who buys hats any more? When I was a child, a man had to have at least one, and women had to have dozens; then came hair spray. Thus, economics, for all its indifference curves and use of the calculus, can't in fact be used to predict things, because prediction in the realm of what's more or less valuable (and hence what the market price of things will be) is an exercise in mob psychology, not in finding out what the "real" value of something is. And forecasts by economists bear me out; if weather forecasters had the same record of accuracy as economists, they'd be on comedy shows, not the nightly news.

This is enough, I think, to show that the position I have taken based on an analysis of how we think in terms of values and goals is empirically verifiable. You would expect economics to be a very soft science indeed if value-ranking is purely personal, and if it is just coincidence or the desire people have not to be different from others that makes one person's ranking of values more or less the same as someone else's. Ten tons of subjectivity do not make one ounce of objectivity.

Each of us, then, has his own value system, based on the relative importance of the goals we have, which in turn is based on the subjectively created ideal we have of the "real true self." It follows from this:

Conclusion 5: It is morally wrong for one adult to force another to act in conformity with the forcer's value system.

The essence of being human, really, is that, within the limits of our genetic potential (our basic human nature), we can make ourselves be whatever we want to be; and this means in practice that our goals in life and their relative importance must be left up to each one to choose for himself.

Now of course, you can't force a person to choose, because he's free, and it's a purely mental act, so that you would never even know what his choice was unless he told you. But you can force him not to be able to carry out his choice. And this is what is meant by "forcing him to act in conformity with your value system." If he wants to play baseball and you want him to be an engineer, and you cut off his income or send goons to rough him up if he goes near a baseball field or puts a glove on his hand, then you are saying that your idea of what he is is to prevail over his; and so you are human and he isn't. Unless you can show that in fact he is doing something that he doesn't realize--that he's not compos mentis, and the choice he's making isn't what he thinks it is (as when someone has been brainwashed by a cult, as seems to have happened recently in some cases)--then he can make of himself whatever he pleases, as long as it doesn't interfere with anyone else's rights.

The only time you can morally restrict the activities of a non-insane adult is when this is the effect of an act which defends someone else against a violation of a right involving equal or greater damage to that other person. Thus, you can force a thief to work to make restitution for what he has stolen from others, or you can put him in prison to defend society against him--or even kill him if that is an act of defense of people's lives; because, as we will see in the chapter on ethics, the violation of the person's right in these circumstances is an unchosen side-effect of the choice to protect the others from damage. But you can never choose to impose your value-system on him; this is to make him your slave, when he is free.

Children and the mentally incompetent do not fall under this restriction, however, because (and insofar as) they do not understand the relation between their acts and their real effects as opposed to the effects they intend them to have. Children do not see this, first of all because they lack experience in knowing what effects acts have, and secondly because they think abstractly, and believe that if they prescind from unpleasant consequences, those consequences won't happen.

Children, then, have to be taught to make informed choices, and cannot be allowed to choose things based on their blind and abstract view of what is entailed in the choice, because this sort of thing makes them unfit for what they are going to be faced with in the adult world, and they will probably ruin their lives in the quest for fulfillment. Furthermore, since children do not have a clear idea of what their possibilities are, they must be exposed to various (of course moral) life styles so that they can realize that these are possible for them, and that if they should choose one of them in the future, they will not be cut off from pursuing it because of inadequate preparation.

Conclusion 6: Children and mentally incompetent adults must be forced to live according to a value system that is not their own at the moment.

The purpose of the forcing in the case of children, as I mentioned just above, is (a) to teach them the concrete consequences of choices that they make, (b) to show them the potential they in fact have for various different life styles, so that when the time comes they can know where their talents and interests lie, and (c) to give them a preparation for any legitimate life style so that if they choose it when the time comes (even if it is not the one most consistent with their native abilities), they will be in a position where they can pursue it.

When does "the time come"? When the child is capable of realizing what is entailed in a choice, and how it in fact will affect his future in this world and his eternal life, and when he is prepared to begin serious work toward developing himself toward a place in society. That is, when society can begin expecting things from him, and is not solely concerned with doing things for him. Since our society is becoming more and more complex, the actual age at which this being on one's own where one cannot be forced any longer to act according to alien value-systems is later and later; it is generally somewhere around age twenty now, I would say.

Of course, there is not an abrupt transition from childhood to adulthood, really. From their teens, many children are working, where they are expected to do things irrespective of the personal development that comes from them; and certainly, by their teens, most children are pretty capable of realizing that acts have automatic consequences and our choices do not control our futures in this world in an absolute sense. As children grow older, they should be given more and more control over more and more significant aspects of their lives; and, for instance, by the time they enter college, their parents should not be the ones who decide on what their major is to be, or what career they are to be headed for. It is hard for a parent when the student picks something like drama for a major, because it is so obvious to the parent that, however talented the child may be, "making it" in this field is like playing the lottery; but once the parent has pointed this out, it is up to the child to make up his own mind.

Persuasion, then, even of other adults, is perfectly legitimate; but it should be done with respect, recognizing that the other need not have the same idea of relative importance that you have, and simply informing the other of whatever reasons you have for seeing some things as more desirable than others. But trying to prevent an adult from doing what he sees as desirable is to dehumanize him, as I said.

Now of course, those who are not mentally competent are also people who have to be forced to live according to someone else's value system, because they are in the position of being permanent children. They should have as much control over as many aspects of their lives as they can handle, but should not be allowed to make major decisions on their own, precisely because they make them in a fairy-tale world, and not in the world that actually exists.

In both of these cases, each person must be handled individually, because some children are more mature than others, and some retarded or insane people are more competent than others. The point I am making is that the fact that they are free beings and therefore capable of making free choices does not mean that they should be left alone, because they are making uninformed choices and the act they choose often contradicts their intentions (or even contradicts the rights of others that they don't see).


Notes

1. Which is not to say that you might not have a pretty good idea of the probabilities, if you know the circumstances in which the act was performed; and it is legitimate to protect yourself from similar acts by him in the future. It would be supremely imprudent to entrust your child to someone who has had credible allegations against him of child molestation. But this does not mean that you know that he was immoral; that is between him and God.



Chapter 3

Essential acts and necessities

I will get to what might be called "potential values" (things that could be values) and a classification of different kinds of values later; but now I want to clear up something that I mentioned at the beginning of the discussion on importance.

Let me begin with a few definitions:

An essential act is one without which a human being cannot be human.

An absolutely essential act is one which, if not performed, results in death.

A relatively essential act is one which, if it cannot be performed, leaves the person dehumanized.

A relatively essential act is more essential if the dehumanization implied in its deprivation is greater.

Dehumanization is being forced to do less than what is implied in one's human genetic potential as human.

A necessity is a means toward an essential act.

An absolute necessity is that without which a person dies.

A relative necessity is that without which a person is dehumanized.

A relative necessity is a greater necessity if it leads to a more essential act.

There is a certain parallelism, as you can see, between essential acts and goals, and between necessities and values; but there are significant differences. There is no such thing as an "absolute" goal, because all goals are freely chosen, and so the lack of a given goal won't destroy you. But if you can't breathe, for instance, you die.

But before going further into the distinction between goals and essential acts and values and necessities, let me make clear what I mean by "dehumanization," since it is a word that is tossed around pretty freely nowadays; some people even think that if there is any disparity in income, the ones on the short end are dehumanized--and this, I hasten to say, is just not true.

When the unifying energy of the body builds the body, based on the pattern in the human genetic structure, it builds certain organs which have definite functions in relation to the whole. These, of course, are our faculties. The acts we can perform because we have these faculties are our genetic potential.

But the genetic structure of the initial cell is not only the pattern for the faculties we have in common with the rest of mankind; it also determines individual differences like height, metabolic rate, musculature, and so on. The individual differences, insofar as they are based on our genes, are our individual genetic potential; but the ability we have to act that is common to human beings as such is the human genetic potential.

In either case, since we are free and these are faculties, we can develop them to a greater or lesser extent--or even, supposing there to be no contradiction involved, choose not to develop them at all, as when a person chooses to be celibate, even though the ability to reproduce is part of the human genetic potential. But this freedom is not the issue here; every exercise of freedom in a social context restricts to some extent others' freedom to develop themselves, and so if it were immoral to prevent any development of another, we could not act at all.

The problem comes in preventing someone from doing what his human genetic potential allows him to do. If you happen to have special innate ability as a pianist or basketball player, you are not being dehumanized if you are prevented from taking piano lessons or participating in basketball. I remember one student at Xavier University whom I met and asked whether he was still scoring as well as usual. His face became as long as his body as he answered, "I'm academically ineligible this semester." He was not able to realize (for that semester) his genetic potential in basketball; but this was not dehumanization--far from it, in my opinion, given the grounds for his ineligibility.

The reason why being unable to fulfill your individual genetic potential is not dehumanization is that this sort of fulfillment deals not with just being human but with being the special example of humanity that you choose to be. Hence, this is precisely the realm of self-creativity (in spite of the fact that your body makes certain acts easier than others), and isn't essential to your being human. I talked about this in Chapter 4 of Section 4 of the third part 3.4.4, in discussing what life is all about, where I mentioned that the talents we have been given have no imperative connected with them that would make us choose as a goal the life style that they make easy. If you have potential as a basketball player, and you want to do something else with your life, this is up to you; it is just that the talent will give you an edge over normal people if you choose this life style.

But there is a certain minimal development of our abilities that we cannot, generally speaking, avoid choosing without doing positive damage to ourselves; and this is what is meant by the "human genetic potential" as opposed to the individual one. If you refuse to eat, for instance, or refuse to eat a balanced diet, you make yourself sick and cannot do what any normal human being can be expected to do just because he is human; if you put out your eyes and cannot see, you cannot do what a human being can do because he is human.

And if anyone else forces you into a situation like this, he is dehumanizing you.

Conclusion 7: Depriving a person of being able to do what he is capable of doing becomes dehumanization when the act prevented is one which any human being could be expected to be able to do just because he is human.

This is still rather fuzzy, because in practice we get the notion of what "human beings can do as human" from what for practical purposes everybody we observe actually does. Hume made much of the fact that we know "human nature" from observation of actual acts of human beings, and that this didn't give us an absolute grasp on it; but he concluded from this that there's no such thing.

But that is silly (as even Hume in practice admitted, since his claim that reason cannot motivate the will was itself based on his--faulty--analysis of what the structure of the human being is). If a person can't see at all, what sense does his having eyes make? If, like my father, his seeing is so fuzzy that he can't recognize people or read anything but a newspaper headline six inches in front of his face (and consequently has to read through his fingers), then isn't he also blind? The fact that we can't answer the question of when this relative blindness becomes less-than-perfect vision shouldn't blind our minds to the fact that there's a division there somewhere. And the same goes for any other human trait.

Of course, the individual genetic potential is not the formation of some special organ, but only the greater-than-normal strength of some organ like the ones everyone possesses. Hence, there's not going to be an absolute, cut-and-dried distinction between what is hypothetically "essential" if you want to be the distinctive human being you have chosen to be and the essential acts in the sense of those whose deprivation makes you less than human.

What I am saying is that dehumanization occurs at the level of the minimum that can be expected of any human being. And we find this minimum by observing what "practically everybody" can do, and setting this as our "zero" for humanity, saying that below this level, the person is so limited that he is a kind of less-than-human human being. Just as, below the freezing point of water, things are considered cold by just about everybody, so it makes sense (as the Celsius scale does) to put the zero for heat at this temperature, and consider everything below it as "negative heat" or "coldness," even though, from there down to zero on the Kelvin scale, there ontologically is still heat (molecular motion). We saw this in discussing the problem of evil in Chapter 12 of Section 5 of the first part 1.5.12. To say that there's no such thing as coldness because the Kelvin scale doesn't admit of negatives (zero is molecular rest, and there's no anti-motion) is just to be silly. Relative terms have meaning, even though, obviously, not absolute meaning.
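To put the standard textbook numbers on the analogy (nothing here is in dispute; this is just the familiar conversion between the two scales):

$$T_{\mathrm{K}} = T_{\mathrm{C}} + 273.15, \qquad \text{so that } 0\,^{\circ}\mathrm{C} = 273.15\ \mathrm{K}.$$

The Celsius zero, in other words, is a conventionally placed zero with a good deal of molecular motion still below it; and that is exactly the role the "zero" for humanity plays here.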

But in practice, the fact that this positioning of the "zero" for human ability to act--the dividing line between the human genetic potential and the individual genetic potential (talent)--is not objectively fixed (and is not even in principle objectively fixable) means that the point at which dehumanization occurs will vary from era to era and culture to culture.

That is, what "practically everybody" can do in the United States today is something that kings couldn't do as little as a hundred fifty years ago. I once drove with my son to Texas, a trip of twelve hundred miles, which we did in two days, in comfort though the air outside the car was over a hundred degrees Fahrenheit and with enough quiet so that we could converse in normal tones with each other; not to mention that we had the world's greatest orchestras at our disposal whenever we didn't want to talk. It was considered enormous hardship that I chose to return on the bus, spending thirty consecutive hours to get back to Cincinnati. Louis XIV should have had it so good. If he wanted to go from Paris to Versailles, it would have taken him almost as long, and the jolting and discomfort would be something no human being nowadays should be forced to endure.

There are certain things, however, which, no matter what age you are in, can be called dehumanization. Blinding or crippling a person is obviously to dehumanize him, irrespective of the culture he lives in; so this zero below which dehumanization occurs is not infinitely flexible, and cannot just be set anywhere short of death.

But above this "relatively absolute" zero which is cross-cultural(1), there is the zero which depends on the culture's development, where for practical purposes everyone in the culture can do something, and consequently forcing someone not to be able to do it is dehumanizing him. For instance, "everyone" in our culture can have a television set if he wants one; and so if a person is so poor that he can't afford even a second-hand television, then he's living a less-than-human life; while in India or Bangladesh, say, having your own television set is a luxury that relatively few enjoy. Hence, depriving a person of one in those countries is not dehumanization, because this wouldn't be preventing him from doing what "for practical purposes everyone" can do, and what he could be expected to do because he is a human being.

A further distinction must be made here, however. It would be strange indeed to say that watching television is an essential act, in the sense that if you don't do it, you are less than human in the present-day United States. If anything, given the quality of programming, it would be the other way round. Still, a person who is so poor that he couldn't watch television if he wanted to is, it would seem to me, in a dehumanized condition.

What is the solution here? Since we are human beings who can set goals for ourselves, it follows that it is dehumanizing if a person has no flexibility in choosing what to do, and must spend all his time simply surviving.

Thus, a certain minimum of what is not essential is essential to being human, because otherwise the human being is not in practice free. Hence, while the actual act itself, like watching television, is not essential, it is essential to have a certain minimal set of such acts to be able in practice to choose among, or you have no room to exercise your freedom. You may choose not to have a television set, and spend your time in the park instead, or reading a book, or whatever, depending on your idea of what your self is; and, depending on the culture, a greater or lesser number of these options must be available to you or you are dehumanized.

Conclusion 8: It is essential for a human being as free that a certain number of non-essential options be available to him to choose among.

Dehumanization, in other words, is another name for harm or damage. The person who has no options at all beyond bare survival is damaged in his freedom because he is a free person who cannot in practice exercise his freedom. Hence, a person who is by circumstances or human agency dehumanized is in that self-contradictory position I talked about in Chapter 12 of Section 5 of the first part 1.5.12 in discussing the problem of evil. It is not, as I said, an actual contradiction, because it depends on our standards, which we set up (in this case, with the justification that "practically everyone" can do such-and-such); but, relative to our standards, it is a contradiction. And since each of us is human and therefore does have a human genetic potential in common with others, we have a moral obligation not to force anyone to live below the minimum implied in this human genetic potential, even though it is not possible to fix this absolutely and perfectly accurately--especially since there must be this flexibility in allowing a choice among a certain number of acts which are not in themselves essential.

The culture defines how large this minimal set of options is to be in economic terms.

The poverty level of a given culture is that level of financial resources such that below it the person does not have the minimum ability to choose that "everyone" in the culture has.

We will see in the next section in discussing economics how money is a certain quantification of the ability to act; but since money doesn't pick out which act you are to perform, it is also a quantification of freedom to act. And as we were just saying, a certain minimum level of freedom is essential for human life; and the poverty level in a given culture defines the zero for this aspect of humanity.

Now then, it is pretty obvious (to me, anyway) that no harm was done to the young basketball player by keeping him away from the sport for a semester so that he could pass his courses, even though it disappointed him and perhaps even angered him. It might not even be doing harm to him if he were not permitted to play basketball at all, since he has no right to be a basketball player specifically. Depriving a person of a given one of the non-essential options is legitimate, because if it weren't, then the non-essential option would be essential, which is a contradiction. The point is that there must be some set of non-essentials open to a person.

But this set of non-essential options that is essential for people to have does not have to include the particular act that the person "really wants" for himself, because that would imply that a person has a right to realize his choice in this life (i.e., can't be prevented from doing so). But since people's choices often are in conflict (as witness all the candidates who choose to be President in any given election), this would be impossible.

Hence, it is not essential in this life that we realize our goals in order to be free and to set them; what is essential is that we not be put into a position in which we have no room to maneuver in this life at all. In fact, one of the effects of not being able to do anything but survive is that a person doesn't set goals for himself, because he realizes from what he can see of his life realistically that it is futile to do so. Given some room actually to choose what he wants to be, he can then recognize what it means to be human, and it is more possible for him to set goals that might not be able to be realized in this life, but will be fulfilled after he dies.

This is a tricky area to sort out in practice; but it seems to me that the basics are true. We don't have a right to become what we want to be, nor do we have a right to "equal opportunity with everyone else" to become what we want to be--because no damage to us as human is done by depriving us of these, notwithstanding what our society happens to think. I will discuss this in terms of rights in the next section. Nevertheless, we have a right to what is essential to us as human, because otherwise we are human beings who can't do what is minimally human.

And here is where we get into the real distinction between what is essential and what is important. There are five fundamental differences between essential acts and goals.

First, we have a right to be able to perform essential acts; we have no right to be able to achieve our goals.

Rights, as we will see later, are based on our self-determination as persons; but the claim of a given right is based on being able to show that a contradiction in one's present being occurs by being forced not to do the act. It is not exactly the same as dehumanization, because, for example, my driver's license gives me the right to drive a car in Ohio; and clearly this is not something I have because of the genetic potential I have in common with every other human being. But the agreement I made with the State of Ohio is contradicted (violated) if I have fulfilled my part of the bargain and Ohio refuses to let me drive.

We will get into this thorny question later, as I said. But since any dehumanization is a contradiction of one's human genetic potential, then, even though not all rights are human rights, every case of dehumanization is a violation of a right (the right we have precisely as human). But essential acts define what is and is not dehumanization; therefore we have a right to perform all essential acts.

But we have no right to be allowed to be the kind of being we want or choose to be. The fact that someone wants to be an actor does not mean that some theater or studio has to hire him; whereas if a person is dying of thirst, to refuse to give him water (supposing that you aren't dehumanizing yourself by doing so) is in effect to kill him.

The point, then, is that when we are talking about essential acts, not to do something positive to enable those acts, supposing that it is in your power to do so, is to connive in the dehumanization of the person, and is the same thing, in other words, as actively injuring him. And this is morally wrong, as we will see when we discuss ethics.

Clearly, the more essential the act the person is deprived of, the more serious the damage done to him. Depriving a person of breathing kills him; depriving him of sight is not that serious, but is very serious in comparison, say, with depriving him of a television set.

No exact quantitative measure can be put on the seriousness of damage done, which is one of the things that makes lawyers rich and manufacturers, among others, nervous. It is the subjective standards of the jury, as things in our country now exist, which determine the degree of damage; and a clever lawyer can work on the jurors' emotions so that the degree of compensation can seem bizarre to most normal people.

I am not proposing any solution to this problem, if indeed it is a problem, and if it has a solution; the point I am making here is that (a) damage can be done by preventing essential acts as well as by some kind of attack on a person, (b) the seriousness of the damage depends on how essential the act is that the person is prevented from doing, and (c) there is no objective criterion for determining how essential the acts are and therefore how serious the damage is.

There is, or there should be, some set of community standards (analogous to market price in the case of economic values) for assessing when damage is done and how serious the damage is; and if we can do pretty well in the market with quantifying what is in itself not quantified, then there should be some way to get a fairly good consensus on a rough-and-ready quantification of damage done to a person.

In any case, the second difference between essential acts and goals is that a person may not morally choose to deprive himself of an essential act, except to avoid depriving himself of something more essential, or at least equally essential. A person may, however, give up any goal he wishes.

Since goals are freely chosen to begin with, they can just as freely be given up, either to pursue other goals, to do something which is essential, or simply because we find we are not interested in them any more. But essential acts are not like this; when we give them up, we are dehumanizing ourselves, or doing damage to ourselves. Just as depriving another person of food and water is to kill him, so to refuse to eat or drink is to choose to die, and, as we will see in discussing ethics, this contradicts our nature as living. Similarly, to refuse to eat a balanced diet, so that you become sick, is to choose to put yourself in a position where there are acts you as human could do but now cannot, because of your neglect of your body. That is, it is one thing to refuse to do an act which you have the power to do; it is another thing to deprive yourself of the power to do it. You didn't give yourself the power; and so removing it is a self-contradictory exercise of your freedom.

Now this sort of thing can be legitimate if, using the Principle of the Double Effect (to be discussed when we discuss ethics), the deprivation of the power or the essential act is an unchosen side-effect of doing something that prevents an equal or more serious deprivation. The point here is that in circumstances when this sort of thing is legitimate, you have no way out that doesn't involve damage of some sort, because the act which attempts the avoidance of one type of damage has as its effect the other type of damage. Hence, what you are doing here is choosing away from the greater damage, not actually choosing the damage done, because no matter what you do, there's no way to avoid damage altogether.

So, for instance, you might have to have your arm amputated to get rid of gangrene, which will otherwise kill you. You are depriving yourself of your ability to pick up things; but the alternative is to die; so if you don't cut your arm off, you are in effect choosing to kill yourself. Obviously an absolutely essential act is greater than a relatively essential one; and so morally speaking you would have to choose the amputation.

Let us draw a conclusion here:

Conclusion 9: It is immoral to deprive oneself of any essential act, however small, for the sake of achieving any goal, however important.

No matter how important the goal may be to you, it is still something you have freely chosen and may freely give up; but every essential act is out of the realm of your free choice, because its deprivation involves you in self-contradiction; hence, none of them may be given up for any goal. Another way of saying this is that you cannot morally do damage, even the smallest damage, to yourself to achieve any goal, however important it may be. The end does not justify the means. The reason, as we saw briefly in Chapter 4 of Section 4 of the third part 3.4.4 and will see again in discussing ethics, is that choosing to contradict yourself implies a self-contradictory goal, and therefore some degree of frustration; but this frustration is eternal, and therefore cannot be compared with the temporal achievement of what you have gained by it (which would be fulfilled eternally if you had it as a goal but could not achieve it here without being immoral).

The third difference between essential acts and goals is connected with what was just said: it is that essential acts are not important; goals are important.

That is, essential acts are not in the same category as goals at all. In the first place, essential acts are presupposed, not purposes to be striven for; every human being, simply as human, can take for granted that he has the human abilities, like being able to breathe, that come with simply being human. What we pick as goals, however, are precisely not something that we are bound to have just because we are human, but something that is distinctively our own, making us this human being rather than some other. This latter is what is important to us; the former is just a given. So if we can do the essential acts, we (rightly) do not consider this as of any consequence; it is as if hydrogen were to be happy about the fact that it has the spectrum it has. We have the essential acts just by nature, by what we are; we don't "deserve" them as if we had to work to "earn" them; they are the beginning of human life, not, as goals are, its end. So for those who can do essential acts, their importance is zero in comparison with goals.

But secondly, when we can't do essential acts, then we must, as I said, give up all goals in order to be able to do them, because no goal or set of goals can be chosen at the expense of performing any essential act; to do so is, as I said, to choose eternal frustration. Hence, if we are deprived of essential acts, their "importance" is infinite with respect to goals.

This is not to say that essential acts are the "most important of all." That would be to put them in the same category as goals, and assume that they have a ranking in importance along with goals, except that they happen to be at the top. But this is not true, as I just got through saying, because in that case, we couldn't take them for granted, as we do, and as we legitimately do. To call essential acts "supremely important" makes them ends, not the starting-point we build from, and this refuses to recognize the reality of the situation. Furthermore, goals are given up for goals of greater importance, which means goals that lead me closer to my ideal self, or which increase my fulfillment. Essential acts are given up to avoid losing more essential acts, which means to avoid a decrease in my reality below the human zero. So the reason for giving one up in order to have another is exactly the opposite in the two categories of acts; and, as we saw in the discussion of the preceding point, there is no crossing of the categories in the direction of giving up essential acts to achieve goals; but the categories must be crossed when it is a question of giving up goals to keep or obtain essential acts.

This is another way of saying that importance is meaningless in relation to essential acts. They are neither "very important" nor "unimportant"; they are either more important than very important or less important than very unimportant--which is what I meant when I said that their "importance" is either infinity or zero. And since their ranking is exactly the opposite (increase in one case, greater deprivation in the other), they should not be talked about as in any real sense the same, however much they might be some sort of mirror image of each other.

Conclusion 10: Essential acts and goals must not be classified with each other; they are in completely separate categories. Essential acts are essential, not important.

It is a failure to make this distinction (and a failure to recognize the corresponding one to be noted below with respect to necessities and values) that has caused much of the confusion in both Marxist and capitalistic economic theories, and has caused in each case much hardship.

But let me state the fourth point of difference. Necessities are in a class separate from values, and must not be classified with them.

This is one of the discoveries I have made that I think is vital for the world to understand, if it is to make progress and avoid human misery in the process.

Necessities are the means toward essential acts. Given this, then by the first point above, it is morally wrong to withhold necessities from people, because people have a human right to necessities. By the second point, it is immoral for a person to refuse a necessity in order to have enough resources to avail himself of any value. And by the third point, the acts enabled by necessities cannot be classified with the acts enabled by values.

What am I talking about? Benjamin Franklin said, "When the well is dry, we know the worth of water." He was wrong. The proverb should be "When the well is dry, we know that water is beyond worth; when the well is not dry, we know that water is beneath worth."

That is, faced with enough water to stay healthy, we don't want more and more of it (drinking water, that is), except perhaps for security purposes, in case the well goes dry some day. Being hydrated (as the physicians say) is in no sense a goal of ours; it is just something essential for life; and so we have a right as human to drinking water, and enough so that we don't do ourselves damage from thirst. If we don't have enough, then we must give up everything we have to get the amount that will keep us alive. If you are dying of thirst in the desert, and someone has a glass of water and says, "You can have this if you will give me everything you have and all your future income," and if there is no other way to get water, you must agree to his "bargain." Why? Because the alternative is death, and what you are giving up is values--and what good are they to you if you are dead? Hence, the glass of water is worth more than everything you have.

Be careful not to confuse what I am saying with the hypothetical necessity of values. A value is necessary for reaching a given (freely chosen) goal; but this necessity-if-you-want-the-goal is vastly different from what I am talking about. You can't say that water for drinking is a "necessity if" you want to stay alive, because you must "want" to stay alive, in the sense that you are forbidden to choose your death. But life, as I have stressed, is not a goal, but simply essential, and taken as a given to be preserved; and hence, we don't "want" it at all; we have to have it. Similarly, necessities like water are categorically necessary for human beings, not hypothetically so (that is, they just are necessary, not necessary-if something-or-other).

Therefore, the "value" of a necessity is either nothing at all or infinity in comparison with values, just as the importance of an essential act is either nil or infinite; and just as in this case, this means the following:

Conclusion 11: Necessities are of no value; they are neither worthless nor extremely valuable, but are in a different class, unable to be compared with values.
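If a semi-mathematical restatement helps (this is only my shorthand for the conclusion, not a claim of rigor), the point is that values occupy a finite, comparable range, while a necessity sits at one of the two extremes:

$$\text{worth of a necessity} = \begin{cases} \infty & \text{when it is lacked,} \\ 0 & \text{when it is possessed;} \end{cases}$$

and neither extreme can be reached by adding up any number of finite values.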

As you can see, this has serious repercussions in economics. In capitalist economics, necessities are classed with values as "very valuable," and the price they command is very high--for the simple reason that those who supply them can demand whatever they please, and they will be paid, up to the limit where greater deprivation occurs because of paying for this necessity. Health care is the most glaring example of this in our country at present. Prices for health care depend practically not at all on supply and demand, but on what the health-care industry chooses to ask for its services; and the reason is that no one seeks health care because he wants to be better off than he is, but because he needs to be less badly off than he is. If there is something wrong with me, especially something life-threatening, I am at the mercy of those who can correct it; I cannot refuse the service, and I cannot therefore refuse to pay whatever they ask.(2)

So a doctor who says, "You need a heart operation, and that will be sixty thousand dollars up front" is not saying that his time is worth thirty thousand dollars an hour; from the patient's point of view, he is saying, "Give me sixty thousand dollars or die." From the patient's point of view, this statement is the same as the robber who points a gun and says, "Give me your wallet or I'll kill you." The only difference for the patient is that the robber is going to do something that will result in his death, while the doctor is going to avoid something that will prevent the death; but in both cases, if the money is not forthcoming, it's curtains.

But does that mean that doctors must provide their services without compensation? Not at all; this would make them slaves of the people they serve, and would dehumanize them in the process of helping others out of dehumanization. So some compensation is morally necessary for doctors and other providers of necessities, like the people who deliver water to your home. The question is how much.

--And I am going to leave this to the chapter on economics, because the issue is not absolutely simple and straightforward. What I am trying to do here is point out that allowing the marketplace to determine the price of necessities treats necessities as if they were just very valuable values, when in fact this contradicts what both necessities and values are; and transactions of this sort are not "freely entered into" by both parties, any more than the transaction of handing your wallet over to a robber is freely entered into, however much it depended on your choice to do it rather than take your chances at fighting or escaping.

From the point of view, then, of the person deprived of a necessity, it is infinitely valuable, and cannot be compared with any or even all of the values the person has; from the point of view of the person who has a necessity, it is of no value at all, and is beneath comparison with the values that lead to what is important for him.

Finally, the fifth point of difference is that values, though objective, are relative to the subjective goal of the person who has them; necessities are both objective and relative to the objective humanity of the person.

That is, you can make something not a value simply by giving up the goal it leads to; but since what the necessity leads to is really the maintenance of your human nature as such, you can't give this up, and so you can't get rid of a necessity's being a necessity. There is nothing personal about a necessity, as there is about a value.

Now it is true that relative necessities are related to the particular type of humanity that exists in a given culture; and so they will differ, as I said above, depending on the era and the culture. But this still does not make them subjective in any sense. For instance, the poverty level in a given culture is something objective for the people in that culture, because in fact in that culture "for practical purposes everyone" has enough resources to be able to exercise the range of choice implied in the poverty level, and below that the people in that culture are reduced to simply surviving and are less than human in practice. Hence, even though a television set may not itself be a necessity (and so it is not dehumanization to deprive a person of one), it is objectively dehumanizing in our culture to force a person into such poverty that he couldn't choose to have a television set if he wanted one. It is the financial resources (the money) that are the necessity, not some specific item that the money can buy.

I would like at this point to say a few words about Immanuel Kant and Ayn Rand. Kant's moral dictum that human beings must be regarded as ends and never merely as means can be seen now to be valid; and in fact, what is behind it is the reason why slavery is morally wrong, even if the person is willing to sell himself into slavery--or even if he would prefer to be a slave and avoid responsibility for his acts.

Since the goal of any value is the realization of some human being's idea of what his "true self" is, or in other words is the humanity of the person who has the value, it follows that one's humanity is by definition the end of any value one has. But it is self-contradictory to treat an end as if it were a means; and hence, since each person's humanity is the end of all his values, it is self-contradictory for him to make it over into a mere means toward some other person's humanity, or for the other person to accept that what is an end, just as he is, shall be a means toward his own end.

Hence, as Kant rightly says, we are to treat each other as a community of ends, no person being subordinated to any other as a means toward the other's personal fulfillment.

But this is not to say that a person's actions can't be used by another person as means toward the other's fulfillment, as long as the personhood of the other is not so used. But what does this mean in practice?

First of all, it means that the person must be willing to act for the sake of the other's fulfillment, and not be forced to do so. Otherwise, the person is dehumanized in that he has become a slave. Secondly, since a person, in acting for another's goal, is subordinating his reality to the reality of the other person, some compensation must be given him for this subordination, so that he can somehow (in practice, by using the services of still others) bring himself up to where he would have been if he hadn't been spending his time for the other's sake. If compensation is not given him, it is not his action that is being used, but his reality, and he is a slave.

Note here that a person may not want, or may even refuse to accept, compensation for his service. In that case, since he is doing it willingly without compensation, it is an act of love on his part; and his goal is precisely the fulfillment of the other's goal as other. This is perfectly consistent with being human(3). I am not objectively any more important than anyone else, and so there is no objective reason why, just because I happen to be the agent for my acts, my own fulfillment has to be their goal.

I am not denying the possibility of love, then. What I am saying is that to force a person to love (to serve oneself without compensation) is dehumanizing; and therefore, if services are demanded of another, compensation must be offered, sufficient to offset the loss the other has incurred in performing the service (including the loss of time he could have spent pursuing his own goals).

With those qualifications, the actions of another can then be values toward goals a person has; and of course, depending on the importance of the goals, the actions of one person may be more or less valuable than those of another. For instance, I would imagine that the actions of Socrates would be more valuable than those of the ordinary philosophy professor if you wanted to learn what your life was about; and you might find the actions of a teacher of business administration more valuable than those of any philosopher (certainly many of my students do).

It is in this sense that one person's life can be said to be analogously "more valuable" than someone else's. If more people want what this person has done with his life, then the actions of his life (his "life" in the secondary sense, as I defined it in Chapter 7 of Section 1 of the third part 3.1.7) are of more value than the actions of some other person who has done nothing for others.

Conclusion 12: The greater or lesser value of a person's "life" in the sense of the usefulness of his actions has nothing to do with the person himself as being a value. Persons are ends, and must never be treated as means.

That is, the fact that we can call one life more valuable than another is only because we are talking about the "life" in the secondary sense, not life in its primary sense, which is the existence of the living being (the existence which is the unifying energy). In this sense, the being in question is never (if it is a person) to be subordinated to any other person, because this would be the self-contradiction of a life's being the means for a life.

Now the fact that the person is his life (because the life is the existence of the person) is the reason why Ayn Rand and her followers have said that "life" is the objective purpose of all actions; and therefore, there is an objective purpose or goal for each person: the preservation of his life.

This is why Rand's ethics is described as "egoism" (and why she wrote a book called The Virtue of Selfishness), while her philosophy as a whole is called "Objectivism." But I think it misses the distinction between essential acts and goals.

First of all, I don't think that you can establish that "self-fulfillment" or self-preservation is the goal of living beings' acts as distinguished from inanimate beings' acts. All bodies are so structured, as accidental change shows, that they will return to their ground state if possible when this ground state is disturbed; so there is nothing distinctive in this respect in the living body's preserving itself. The only difference, really, is that the living body is preserving an equilibrium which is not its own physico-chemical equilibrium.

Secondly, as both living and inanimate bodies show, this self-preservation is by no means the purpose of all acts of the body. Inanimate bodies undergo substantial change when acted on by energy they can't cope with; and so do living bodies. This is just as "natural" as returning to equilibrium when acted on by lesser amounts of energy. At least, you can't deny it without begging the question, and defining what is natural as what is self-preservative, and what is unnatural as not so. Is it objectively unnatural for hydrogen to destroy itself as such when it combines with oxygen to form water? Or is it the natural thing for it to do when confronted with oxygen? There's no objective answer to this question.

Thirdly, living bodies sometimes--often, in fact--show self-sacrifice for the sake of the species or the offspring and so on. Mother birds will risk danger to themselves to lure predators away from the nest, for instance. There are enough instances of this in the living world that it is by no means clear that self-preservation is the objective purpose of the living body. After all, drone bees by nature sacrifice themselves in the act of mating, and the males of black widow spiders often do the same.

But fourthly, this does not automatically mean that the preservation of the form of life is what is the objective purpose of living bodies. If that were so, then as I pointed out in the section on reproduction in Chapter 6 of Section 1 of the third part 3.1.6, it would be unnatural for living beings to eat the offspring they produce; and yet in many species, this is what normally happens. As I also pointed out there, "life" in the sense of the species is an abstraction, and never exists except as some limited individual case of the form of life (with its matter); and so if the purpose of a living body is the "preservation of life" in the sense of the form of life, this is strange, because it would mean to preserve an abstraction.

And that is why, as you will recall, I defined life as essentially equilibrium and as therefore not having a purpose. Life simply is; and its self-preservation, and that mysterious "preservation" that comes through reproduction, are at best pseudo-purposes based on the fact that life is physically and chemically unstable, and existence in equilibrium is not possible (as it is in the inanimate realm) without doing something active about it.

So there is a sense in which it can be said that life presupposes self-preservation, given the physico-chemical instability of the living body; but that does not imply that the preservation of it as life is its goal. The beginning is still not the end; and this is what Rand, I think, missed.

But because, in any human being, the goal chosen will in fact be a definition of the particular "life" that is to be the life of this body (a restriction of it down to being "the person who does this and this and this..."), Rand is right in saying that, at least in this sense, life is the goal of any human choice. But that's tautological. All it says is that, if you are choosing any goal at all, then by defining what life is to mean for you, you are implicitly choosing your life. Now that will preclude picking a self-contradiction as a goal--for example, choosing not to live as your definition of what life is to be for you.

But again, that doesn't mean that "life" in the sense of "the preservation of the form of life" is a goal for you. All it means is that you can't set up a goal as achievable if the goal is in principle unachievable. I can't choose to be a female human being, for instance, though that would "preserve the form of life I have," not because being a woman is not legitimate for a human being, but because what is given in my genes to begin with prevents me from actually being a woman. You see, the self-contradiction comes in choosing something that contradicts what we are given in the beginning, not in the goal as such.(4)

Further, it is, as I said, not inconsistent with what is given and with our human nature to choose as a goal the fulfillment of someone else's goal rather than our own, much as Rand might hate the thought of this. She was reacting against the "altruistic" perversion, owing so much to Comte and the Enlightenment, which held that self-fulfillment was "selfishness" and somehow bad (with the self-contradictory implication that to do what was good for you was bad for you, because it was good for you alone, whereas to sacrifice yourself--to do what was bad for you personally--for the "common good" was somehow supposed to be good for you). It is obviously good for you to do what is good for you, and to seek your own fulfillment. But it is not morally wrong to forego your fulfillment for the sake of another's fulfillment, because objectively you are no more important than anyone else. No one and nothing is objectively more important than anything else, as I said.

It would be immoral to do damage to yourself for the sake of anyone else's fulfillment, because this would be to contradict your given nature for a good purpose; but the end, as we will see, does not justify the means.(5) It is immoral to choose your own harm, just as it is immoral to choose anyone else's harm, because you are no less real, and no less a person, than anyone else.(6) Objectively you are just one among many human beings; and just as they have rights against you, so you have, in a sense, rights against yourself; you can't morally harm yourself any more than you can morally harm anyone else.

But beyond that, just as you need not help anyone else fulfill his particular goal, so you need not pursue any particular goal of your own, and you may morally give uncompensated service to other people. In fact, if you are a parent, you must give uncompensated service to your child, since (a) you caused him to begin to exist, and (b) his existence is therefore your responsibility as long as he is incapable of existing on his own. And so even if he can't repay you, you have an obligation to nurture him until he can make it on his own as a human being. Hence, parents must love their children. But beyond this duty, the fact that you initiate your actions does not imply that your actions must always be directed to your own fulfillment exclusively.(7)


Notes

1. Which could probably be described as depriving a person of the use of his faculties, or destroying the organ itself that is the faculty.

2. This recently was my experience when the doctors told me that funny pain in my chest meant that two of my arteries were blocked, and I could either have the operation now or wait until I had a full-fledged heart attack and died. Needless to say (since I am writing this), I chose the former. But the point is, what choice did I have? I found out later that my hospital stay cost some $62,000, all but $750 of which my insurance paid. But if it hadn't, I would still have had to have the operation, and would have gone deeply into debt to pay for it.

3. In fact, what it does for the lover is make the beloved's reality as defined by the beloved a goal in his own life, and thus in his spirit, he is "with" the beloved, and rejoices in the other's fulfillment and is saddened by his frustration. Since this "withness" is in the will, then it is there eternally; and it is by this that we are not alone after death. We are "with" (in the sense that we know the reality of and share the enjoyment of) all those we care about for their own sake, which in practice means those to whose goals we are willing to subordinate our own.

4. Of course, "sex-change" operations do not in fact change one's sex; one still has (or has not) the Y-chromosome in every cell, and the skeleton, musculature, and so on of the sex one is; the making of a pseudo-organ and removal of one's sexual organ (together with artificial hormones) only allows one to pretend that one is the other sex.

5. You can, however, using the Principle of the Double Effect, permit (others to do) harm to yourself, in order to avoid a greater harm to someone else. In this sense, Jesus was moral to allow himself to be crucified to save everyone else who wished it from eternal damnation. But, as Jesus's actions show, he could not morally bring it upon himself. For instance, he remained silent until the legitimate authority asked him point-blank if he were the Messiah, and he answered in the affirmative, as he had to do (a) because he was commanded, and (b) because it was the truthful answer.

6. When one uses the Double Effect, as we will see, the choice is away from evil, not for it, even though the evil is foreseen. But this needs considerable explanation.

7. If selflessness or love is the principal Christian virtue, it does not follow that it is the principal philosophical virtue; and this is where the perversion that Rand was rightly reacting against came in. Love as something we ought to do is, in a philosophical context, a contradiction, because the "ought" implies an obligation, which means that you will be worse off (punished) if you don't do it. But the motive for doing something that is commanded, as we will see in the section on ethics, is that you know what side your bread is buttered on, and you are trying to avoid personal harm. But that makes your own self and its fulfillment or non-fulfillment the purpose of your choice--which is obviously inconsistent with loving. That is, if you love in order to be better off, you are not loving. This is the inconsistency I see in Buddhism, for instance. As a philosophy, it contradicts itself because it wants people to love for the sake of their own fulfillment.

It turns out that if you love, then those you care about are with you eternally, and this is fulfilling (for the person who wants this expansion of his person); but you can't love in order to have this personal fulfillment.

Christianity avoids this dilemma, first of all by providing someone lovable to love: someone who has demonstrably done the utmost in loving you; hence, to love him and to imitate his love is rather a call than a command. Secondly, the Christian command is hypothetical: "If you love me, keep my commandments; and this is my commandment: for you to love each other as I have loved you." That is, you show your love for Jesus by your love for every other human being. Thirdly, the self-sacrifice of Christian love is not the willingness to do damage to yourself; even Jesus prayed not to have to undergo his ordeal "if it was possible," and so only bowed to the inevitable, and did not actively seek it. Further, Jesus' sacrifice was seen in the context of the fact that he would not in fact be destroyed but would come back to life, just as that of his followers looks toward a time when "every tear will be wiped away." The Comtean kind of self-sacrifice is not this sort of thing at all; it is subordination of one's reality to that abstraction called "humanity"; and it was perfectly right that Rand and her followers should contemn it.

But the point is that Christian love makes sense only in the context of the supernatural life that goes along with it; in the natural sense, while love is no less human than self-fulfillment, it is certainly no more human than self-fulfillment; and so there is no pull one way or the other in the natural realm. Given the supernatural life, however, following Jesus means "a hundred times as much in this life and life everlasting"--which still can't be a motive for loving, but certainly, once one chooses this life, means that it makes more sense than the alternative.



Chapter 4

Kinds of values

I mentioned earlier that there was such a thing as a "potential value" and that this would allow us to classify values. It is now time to discuss this a bit further.

A potential value is some aspect of an object that in fact leads to some human activity.

That is, a potential value has as its (natural) purpose, in the sense defined in Chapter 4 of Section 3 of the second part 2.3.4, some activity that could be made a goal for a human being, since it is a human act; and therefore, if the act is made a goal, the aspect is an actual value for the person who has that goal.

Obviously, values--actual or potential--are defined as such by the goals they lead to; and so this allows us to define the different kinds of values there are by listing the various types of human acts that can be made goals for life.

Let me say first of all that any human act can be made a goal for human life, as long as its exercise does not contradict any other aspect of one's reality. For instance, there is nothing wrong with eating simply for the sake of eating (making the act itself the goal), and not having nourishment as its purpose, as long as one does not make oneself sick or malnourished (including being unhealthily fat) by what or how much one eats. It is all right, as we will see in discussing ethics, to eat something that has no food-value at all; though it contradicts the function of nutrition if you eat and then throw up so that you can't digest it.

By the same token, any human act can be a value toward some other human act (in the person himself or in some other person) as its goal. Even the highest human acts of thinking, for instance, can be values toward, say, teaching someone, or even toward figuring out a way to perform some physical act like weight lifting more efficiently. Whether the act is a goal or a value depends on whether there is an answer to the question, "Why am I doing this?" beyond the simple "Because I want to."

You can, obviously, find out what the goals in your life are by asking the question, "Why am I doing this?" until you can't give an answer any more. And, as I said earlier, you can rank these goals by pairing them off against each other and asking, "If I can do only one of these, which one would I do?" For instance, suppose you ask yourself why you watch television, and your answer is "Because I like to." Then it's one of your goals in life. Suppose you also ask yourself why you play racquetball, and your answer is that you want to keep yourself healthy; then it's a value for being healthy. But if, when you then ask yourself why you want to be healthy, you say, "Because I want to be," then health is another goal of your life. Thus, you find that watching television, being healthy, knowing philosophy, eating éclairs, painting pictures, are all goals for your life.

Now to find out importance, you ask yourself, "If I can't both watch TV and eat an éclair, which would I do?" If the answer is "eat the éclair," then you would ask, "If I can't both eat it and read philosophy, which would I do?"--and I would hope the answer is "read philosophy." And so on. If you wanted to, you could find all of the fourteen thousand three hundred fifty-two acts (or whatever) that you do for their own sake and not any further purpose, and you could rank them all against each other, so that you could list them in order of importance from one to fourteen thousand three hundred fifty-two.
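Since this ranking amounts to nothing more than sorting your goals by pairwise preference, a minimal sketch of the procedure can be put in a few lines of code (Python here, purely for illustration; the goal names and the fixed preference order that stands in for your answers to the pairing question are hypothetical):

    from functools import cmp_to_key

    # Acts done for their own sake: those where the answer to
    # "Why am I doing this?" is simply "Because I want to."
    goals = ["watch television", "be healthy", "read philosophy", "eat eclairs"]

    # A fixed order standing in for your answers to "If I can do only
    # one of these, which one would I do?"; a real person would answer
    # each pairing directly rather than consult a list.
    preference = ["read philosophy", "be healthy", "eat eclairs", "watch television"]

    def compare(a, b):
        # A negative result means a outranks b in importance.
        return preference.index(a) - preference.index(b)

    # Pair the goals off against each other and list them by importance.
    for rank, goal in enumerate(sorted(goals, key=cmp_to_key(compare)), start=1):
        print(f"{rank}. {goal}")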

With that said, what kind of (potential) values are there?

First, there are physical values: those things which enable a person to perform physical acts well, or to have a certain appearance of body. The actual acts as goals would be classified under exercise if they have no further purpose--for instance, if you run, not to be healthy or to have a good-looking body, but just because you like to run. If you want to "be in shape" as your purpose in running or exercising, then what you want is a certain kind of body; and the exercise is then a value for this goal, not a goal in itself. Physical play is performing physical acts for their own sake, with no real purpose except the act itself; when the acts have a further purpose, you are no longer playing. Obviously, equipment that is used in exercise or play is a value for it, and so it would be a physical value.

Exercise can also be a value for looking good, and this would still be a physical value. Obviously, clothes and cosmetics are physical values in this sense also. Possessions, as a kind of extension of one's physical reality into the inanimate realm, are physical values, because they enable various physical acts that we can't perform without them.

There is nothing wrong with having looking good as a goal in life, and even with making it a very important goal. We tend to think of it as "vanity"; but after all, it is your body, and if it's disgraceful to live in a house that is a mess, and desirable to have a house or a desk that is neat and pleasing to people's eyes, then by the same token turning yourself into a body that is an eyesore is hardly charitable; so why shouldn't you want to look as pleasant as you can?

Another goal for physical values would be health of the body. This isn't exactly a biological value, because it involves the physical condition of the body, not the acts of life: your body is physically such that it can perform with ease any of the acts you ask of it, and is not hampered in the exercise of your genetic potential by anything from within it.

Actually, health as the ability to do with ease any act within your genetic potential is a value, whose goal is the acts in question; but it is still true that it involves a certain state of the body; and this state as a state can be a goal, and need not be solely for the sake of the acts. In that sense, as a goal, it is the perfection of the body as a human body.

Health is generally regarded as a kind of necessary act: the minimum below which you are unhealthy (can't act up to your genetic potential). But there are obviously levels of health, and what I would mean by health as a goal would be "being fit." Again, it is perfectly legitimate to make this one of the goals of your life, and to take as values eating the right foods and doing the right amount of exercise and so on that lead to this goal.

Health is not exactly the same thing as "being in shape," in the sense that weight lifters speak of it; because very often what they are talking about is either looking good or being very strong. Again, these are legitimate goals, as long as the quest for musculature and strength does not contradict being healthy, as it does if one takes steroids. Steroids are a value for being strong, but a disvalue for being healthy, because in the long run they damage the body; and hence they must be avoided.

All of these goals tend to be regarded as pretty "lowly" and not worth having; but this, I think, is prejudice. True, they are the least spiritual of our human acts; but that does not mean that they are worse than our other acts and are to be avoided. Those who "mortify the flesh" and neglect their bodies in the name of the "spiritual life" are doing what is morally wrong, because we are not angels; we are embodied spirits, and the spirit is also, as one and the same act, the unifying energy of the body--the energy which builds the body. The counter-immorality, of course, comes from being so interested in appearance, health, or strength that one refuses even a minimal sort of development of one's mind.

My own personal view with respect to all of this is that my Master gave me this body, and I want to give it back in as good condition as I can. And if "mortification of the flesh" is a value for Christians (as it is, since it is a demonstration of caring for the beloved more than one's own comfort), it is plenty "mortifying" to toil at those Nautilus torture-machines that can get your body into such superb shape.

The second class of values is that of biological values. These obviously enable or make easy the vegetative acts of nutrition and reproduction. Eating and sex can be goals in themselves, because, as I said with eating, they can be done for their own sake and for no further purpose, as long as the function of the faculty is not contradicted in the process.

The biological acts of growth and repair of injury can't be goals, because, first of all, growth is a process and is automatic, and so is not subject to our choice; and secondly, repair of injuries is obviously a return from a damaged condition, and so is not a goal to be striven for. But eating and sex are acts in their own right, and so can be chosen as such as well as for the effects they have because of their biological function.

It was held by St. Augustine that it was immoral to have sex except as a value for reproduction, because to exercise the act as an end in itself, he thought, contradicted it as reproductive. He was wrong; if he weren't, it would be immoral to have sex after menopause, which is scarcely an opinion that has been widespread among ethicians. It is also not immoral to have sex because it feels nice, making the biological act a value for the sensitive goal of the feeling--again, as long as none of the other aspects of sex or the persons involved are contradicted.

Without going into the morality of sex here (we will see it later), the reason it is not immoral to use sex just for the act is that even in itself not every act of sex does produce offspring; not even every act during the fertile period of the woman, as women who are trying to have children can testify to their sorrow. Hence, it is not contrary to the nature of the act if it is performed and does not result in a child. It would contradict the act if, in its exercise, you did something to make it impossible for it to result in a child when it could result in one--for instance, using a condom, which obviously makes a reproductive act a non-reproductive sort of act, or using a pill to suppress fertility during the fertile period, so that an act which could otherwise result in a child cannot. But as long as something is not done to change the type of act you are performing, then you can do it for its own sake and not necessarily for its effects (of course, you would have to accept the effect--the child--if he occurs; the point is that you don't have to have him as your goal).

The values connected with these goals would be the different types of foods and so on in eating, and books on sexual techniques and what are called "sexual aids" in the case of sex.

The third class of goals and values obviously would be connected with sensations, where a sensation of some sort is the goal and what produces the sensation is the value. Aristotle mentions that even animals sometimes just look at things apparently for the sake of seeing, and certainly humans look at sights just to see them, making the sensation itself the goal of the act. Clearly, as he also says (in fact, it is in the introduction to the Metaphysics, where he is giving the evidence that we have a natural desire to "know"), we can use sensations as values for understanding or for action; but we can also choose them just for their own sake.

Emotions or "feelings" are a little bit peculiar among sensations, since they are connected with instinct as the built-in program linking information to behavioral response; and as the consciousness of this program, the pleasant emotions are, as it were, incentives for the act in question, as the unpleasant ones are incentives to avoid it. It would seem a little odd, therefore, to perform the act for the sake of the emotion that was supposed to induce it; it sounds like putting the cart before the horse; and philosophers like St. Augustine thought that this was contradicting the natural order of things, and was immoral.

But just as you can use understanding (a spiritual act) as a value for performing physical acts or for biological purposes like finding out your biological equilibrium and figuring out what to eat to stay there, so you can use any human act as an end and any other one as a value for that end, as I said at the beginning of this chapter. Hence, it is legitimate to make the sensation of pleasure from eating the goal of the act of eating and not use the pleasure only as a means to get you to nourish yourself properly.

It is a mistake, as I said in discussing instinct in Chapter 4 of Section 2 of the third part 3.2.4, to say that in animals, the feeling is the incentive to perform the act in question, as if animals ate because the food tasted good and avoided eating something because it tasted bad. To do this, they would have to be able to know relationships as such and set goals for themselves, which would mean that they would be understanding, not living on the level of instinct. In animals, the sensation occurs in conjunction with the operation of the particular drive in question, and is not an incentive to do it at all, but merely an epiphenomenon of it. The animal feels the emotion, but the feeling does not induce it to perform the act; the feeling is just there, as a gratuitous addition to the act. It is only humans who can use emotions as incentives or not.

And actually, when a human being uses an emotion (or rather, the anticipation of an emotion) as the incentive for choosing an act, he really has the emotion as the end, and the act as a means toward it; because as a motive (and this is what you mean by an "incentive" in the context of a choice), it is the chosen effect. Hence, if the intention of "nature" is that emotions are incentives to acts, and are to motivate us to perform the act, this implies that for us it is natural to have the emotion as the end and the act as the means; so far from being unnatural, it is exactly what "nature" intended--though what it "wants" by this is the guarantee that the act will be performed.

So the natural order of things by no means requires us merely to permit, as it were, the pleasure connected with an act and to make the act itself the goal of the pleasure; it is perfectly legitimate to have the pleasure as the goal and to make the act a value for the sensation. Thus, for example, you can eat because of the taste, and you can have sex because of the pleasure, prescinding from the biological function of each. Of course, you can also make the act or its biological effect your goal and take the pleasure as a help in performing it; what I am saying is that the former is as legitimate as the latter. In fact, it is legitimate to have the act, its effects, and the sensation all as coordinate goals, so that none of them is subordinated to another as means to end.

It is sometimes considered Christian to eschew the "pleasures of the flesh" for the sake of the "true pleasures of the contemplation of God"; but this is actually a rather Manichean and unchristian way of looking at things, and is more Stoic than Christian. Christianity, especially with its emphasis on the Resurrection of the body, is not one of those "spiritualist" religions that holds the body in contempt, however much certain Christians historically may have done so. Ironically, St. Augustine is (with some justification) looked on as one of the foremost of the "contemners of the flesh" because of what he said about sex; and yet he was the one who fought the dualistic view of humanity that Manicheanism held, and who therefore realized that the body and what belongs to it is good, not evil.

Even unpleasant emotions can be made goals, when the idea is to experience them just for the sake of the sensation. I have mentioned this several times, for instance in the section on the problem of evil in Chapter 12 of Section 5 of the first part 1.5.12, where I said that we take roller coaster rides to experience in a safe context the fear of falling from great heights; and we watch horror films to experience various other frights; and I suppose we watch violent films to experience the disgust of seeing someone's entrails being splattered over the pavement in a context where we know no harm is actually being done (though personally I don't have this emotion as a goal for my life, at least at the moment, and so can sympathize only in the abstract with those who do). It would be immoral, by the way, to enjoy such sights in the sense of wanting to do such things if you could get away with it; that is in effect choosing the evil itself.

With respect to using emotions or sensations as goals, this is not the same, as I was at pains to point out two sections ago, as the esthetic experience, because in the esthetic experience, the emotions are part of an intellectual experience. What I am talking about here is just feeling the emotions--or any sensations--for their own sake, without including them in or using them for anything beyond themselves. This is perfectly legitimate, and consistent with being human. In the esthetic experience, by contrast, when horrible things are seen in a tragedy, the unpleasant emotions enable one to understand a truth about life that could not be understood any other way; and when used in this way, of course, the emotions are values, and the understanding is the goal.

Obviously, pictures, sounds, tastes, things that can be felt like velvet and silk, perfumes, and so on, as well as the machines and films and acts that produce the emotions I was talking about, are the sensitive values.

The fourth category of goals and values are the intellectual ones; and here, either perceptive or esthetic understanding is the goal. I am inclined to think with Aristotle that choosing is not itself a goal, since it concerns itself with an act to be performed. St. Thomas, who held that love was an act of the will, and who also held, as a Christian of the time, that love was the "objective greatest good" for a human being, thought that "possession of the beloved object" was an act of the will, and therefore the act of the will could be an end.

Since for me there are not separate faculties of "intellect" and "will," but rather the act of the spirit's determining itself (understanding, which involves, as I said in Chapter 2 of Section 3 of the third part 3.3.2, a kind of choosing), and the spirit's determining the whole person (which also involves understanding), there isn't really this dilemma. If you are choosing, in my way of looking at things, you are using your spirit's act precisely as a value for some goal in your person as a whole; if you are simply using the spirit's act as an end in itself, you are understanding. In that sense, choosing is subordinate to understanding; but this does not mean that the "will" is subordinate to the "intellect," since both are one.

And in enjoying the happiness of one's beloved, what you are doing is understanding the fulfillment of your goal in her happiness, because as a matter of fact this was the goal of your loving choices connected with her. Hence, happiness is an intellectual act, more of the nature of understanding than choosing.

In one sense, any act chosen as a goal is always going to involve understanding in its fulfillment; because the success of performing the act, if it is unknown, does not satisfy the spirit which chose it. Hence, happiness is always an act of understanding: the understanding that success (the performing of the act which is the goal) is achieved; but the act which is being performed is not necessarily understanding.

Here is the distinction the Scholastics make between the finis qui (the "act which" is the end in question) and the finis quo (the "act by which" the end is grasped--known--as the end). The former is any one of the goals we have talked about (or, if you are talking about "personal fulfillment," the whole set of them); this is success. The latter is the understanding that the goals have been achieved; and this is happiness.

Obviously, since it is always possible to lose a goal in this life, then, as I said in Chapter 4 of Section 4 of the third part 3.4.4, there is always a further goal of hanging onto the success you have achieved; and so complete happiness can only come after death, where success cannot be lost and we know this.

With that said, it is also the case that certain specific acts of understanding, either perceptive understanding or esthetic understanding or both, can be goals in themselves, and sought just for their own sake and not for "what you can do with them." In this case, the means that allow you to have these acts--courses of study, paintings, symphonies, and so on--are the values and the acts you perform with their help are the goals. Cicero has an eloquent speech (aren't all of his speeches eloquent?) on how esthetic understanding (though he didn't call it that) can be an end in itself in his defense of the poet Archias.

One need not justify knowledge or any other human act in terms of "what you can do with it." After all, the ultimate goal of any human act (if it has one) must always be some other human act; and so asking for justification for some human act is merely to say that you have a different value system. This, I think, needs constant stressing, because each of us can see no point in performing acts for their own sake if we don't happen to have those acts as goals in our lives. Each of us also has the idea that our hierarchy of goals is what is "really important" in life; and we can usually give reasons why this is so--as Aristotle and the early Christians did in saying that intellectual contemplation of God (or in Aristotle's case the gods) is the highest act we can perform and therefore is "the ultimate objective good" or the "real goal" for mankind as such. This confuses, as I have said so often, lack of limitation with goodness.

But there are some things that are not goals and need to be justified in terms of "what you can do with them." These are pure values, which have no meaning or reality in themselves, and exist only in relation to what they are for.

I mentioned health as opposed to the perfection of the body. Since health by definition means having nothing internal to prevent you from doing the acts implied in your genetic potential, then obviously in this sense it is a pure value: an abstraction whose meaning is ability to do whatever it relates to, and not some definite act in its own right.

Time is another pure value, because, as we saw in Chapter 6 of Section 3 of the second part 2.3.6, it doesn't exist as such, and is simply a relation among the quantities of a process or the quantities of compared processes. Time is a value in the sense of "the time to do something," which means once again the ability to do it because other commitments don't prevent it.

Doing nothing in the sense of resting is a pure value; because as inactivity it is precisely non-reality and as such it is impossible; hence, it has meaning only in relation to some specific thing you are not doing. It is therefore either the avoidance of something bad, and has meaning in this sense as a necessity, without any positive significance; or it is resting in order to marshal one's forces for the sake of the better use of values or enjoyment of goals. It is a perversion of rest to regard it as a goal, because then what one is seeking is non-existence as an end.

The same sort of thing can be said of freedom. Freedom is nothing but the ability to do a number of different acts; but insofar as you are free, you are not doing the acts that are open to you. If you have a box of chocolates and a piece of cake, you are free to eat either of them; once you eat the cake, you are no longer free with respect to it--and as long as you remain free, you are eating neither one.

Hence, freedom has no meaning in itself, and therefore is not a goal; and those who seek to be free or to keep their freedom are like the person who chooses "doing nothing" as a goal; what they both want is non-existence. In the case of the person who wants to "stay free," however, he wants non-existence with the further contradiction of its being non-existence open to various acts, which (insofar as he wants to stay free) he does not want to perform.

Finally, money is another pure value. Since, for its possessor, as we will see in the next section, it is a certain quantity of the freedom to use others' services in pursuing one's own goals, then insofar as one hoards it without spending it, one hoards the bare possibility of acting, without the action.

Now of course, money can also be used to ward off harm, and so a certain reserve can function as security, which, in our insecure world, can be a kind of goal. This, in fact, is what misers are after when they hoard money. But it is not perfect security, and the obsession with security (i.e., equilibrium in this life) doesn't recognize how this life is structured: we can't be ultimately secure, because our bodies wear out and we die. Hence, money is a value for security; but only a certain amount of money will bring a reasonable security, and to look on a hoard of wealth as security itself is an illusion.

Beyond that, money exists to be spent, and has no existence in itself. Most of the money we have, in fact, is nothing but a number in somebody's account book, and has no physical existence at all.

But we will see this more at length in the next section and the next part. For now, let me simply note that these "pure" values are all just various ways in which we are able to do something: they are potencies or powers, and powers as such have no meaning except in relation to the acts they are abilities to perform; and the ability to do something is really what a value is.