THE QUEST FOR THE CAUSE
[The material for this chapter can also be found in Modes of the Finite, Part 4, Section 4, Chapters 3 and 4.]
3.1. Third step: experiment
Given that the scientist cannot logically prove his hypothesis to be the one that actually states the cause, and given that the logic of the situation allows only for falsification (by showing that some fact or other is left unaccounted for by the hypothesis), we would expect scientific method from this point on to be a series of attempts to falsify various hypotheses, so that they could be eliminated--with the hope of eliminating all but one, and so coming to Sherlock Holmes' famous statement, "When you have eliminated all other possibilities, my dear Watson, the one remaining, however improbable, must be the truth."
And, in fact, that is what scientific method does. First, it checks to see if the hypothesis in question does in fact explain all the data that were originally observed; if not, then there is a piece of the world that this hypothesis leaves contradictory, which means that it can't be the true explanation of the effect. This is the "experiment" stage of scientific investigation.
DEFINITION: A scientific EXPERIMENT is a procedure set up to determine if the hypothesis actually does explain the effect as observed.
What is done here, usually, is to vary aspects of the affected object and the supposed "causer" (whatever it is that contains what is hypothesized to be the cause), in order to find out whether the causality still "works."
This is a kind of concrete attempt at abstraction, so to speak. You can't actually remove the effect from the affected object, or the cause from the causer; but if you vary the situation, changing what you suppose to be irrelevant aspects of it, then the sum of the experiments will leave constant only the various aspects you thought to be involved in the effect and cause. You can then find out if the aspects of the "cause" still make the effect what it is.
This is a little harder to describe abstractly than it works out to be in practice.
Suppose you notice that people breathe faster after they have been running, and you are curious as to what running has to do with breathing, since you use your legs for running, not your lungs. Well, of course, you know that you burn energy when you run, and also, the cells in your body get filled with the waste products of energy. You also know that breathing supplies oxygen to the blood (which is what is used in burning, even in the body), and, as you breathe out, removes carbon dioxide from the blood (which is a waste product). But is it that when you run you need more oxygen, or when you run you need to get rid of more waste, or both?
The best guess is, of course, that it is both. But let us say that you make as your first hypothesis that the body needs more oxygen from the lungs. To test this, you put a person on a treadmill and let him run, and watch his rate of breathing after fifteen minutes. You let him rest and do this several times so that you have a good idea of what happens to his rate of breathing normally after fifteen minutes of running. Then you make him run for fifteen minutes with an oxygen mask on, breathing pure oxygen. If afterwards, he breathes just as fast as he did without the mask on, then it can't be that he's breathing faster because he needs more oxygen--in which case, the sole cause of breathing faster is getting rid of the waste accumulated from the running. You have falsified your hypothesis.
If, on the other hand, he breathes after this experiment at the same rate he breathed when resting, then you have by implication eliminated the hypothesis that the faster breathing had to do with a faster elimination of built-up waste in the blood; all he needed was more oxygen, which he got during the running from the oxygen mask.
If he breathes somewhat faster than normal, but not as fast as he did when not supplied with extra oxygen, then your original guess was probably on the right track; he evidently needed more oxygen, since, getting it, there was a "component" of his breathing faster which was accounted for in the slower "fast breathing"; but he needed more than just to receive oxygen, because he still did breathe somewhat faster afterwards.
Of course, in this case, it is still possible that he needs so much oxygen that it can't be supplied even if he breathes pure oxygen when running. So now you will have to devise another experiment which will vary the ability to eliminate carbon dioxide, to see if that enters into the situation or not.
And so on. But all of these ingenious devices are simply to test whether the hypothesis does explain the effect you originally observed. As is obvious, if the effect is at all complicated, this stage of investigation can take decades.
But when all is said and done, what has really been accomplished with a successful set of experiments is that the hypothesis has NOT been proved FALSE. There are still an infinity of explanations that could be true.
For example, it might be that the faster breathing is really a kind of reflex connected with strenuous motions of the legs, the way a bird's neck bobs forward when it walks; and while it is at it, this happens to supply extra oxygen to the muscles, which happen to be what is needed because of the strenuous exertion. It doesn't sound very likely, but it has to be eliminated, somehow.
And this sort of thing can be very significant. The tobacco companies, for instance, are putting forward the hypothesis that the greater incidence of lung cancer found among smokers is due not to the smoking, but to the fact that people who are prone to lung cancer happen to be more inclined to smoke than people who aren't. That is, the lung cancer, they say, isn't caused by the smoking, but both the lung cancer and the desire to smoke are (independent) effects of some cause in the metabolism or genetic makeup of the smoker (or lung cancer victim). Not a likely hypothesis--but again, one which must be eliminated somehow for it to be proved that smoking causes lung cancer.
So we haven't really proved anything, even with a successful set of experiments; we just have reason to believe. We have an explanation that does explain the data we observed, but it is not the only possible explanation, and so it might not be the true one.
DEFINITION: SPECULATION is the discovering of an explanation for a given effect.
DEFINITION: Speculation is SCIENTIFIC SPECULATION if the explanation is checked to see that it is (a) internally consistent, and (b) that it does indeed explain all the observed details of the effect.
Pure speculation or "airy speculation" doesn't bother to do any checking at all; it just comes up with something that "stands to reason." It says, "The cause of teen-age pregnancy is that there's not enough education about contraception." This would imply, of course, that those who knew about contraception wouldn't get pregnant; but the pure speculator doesn't bother to find out if this is the case, since it "stands to reason" that the explanation of why teen agers get pregnant is that they don't know how to avoid it. But of course, they might also want to get pregnant (for instance to tie the boy to them as the father, or even to have a child by him whether tied or not), or they might get pregnant because "decent girls" don't plan to have sex next Monday night, and so they don't take the Pill until it's too late for it to work--and so on.
This kind of thing goes on all the time; but it's not scientific. The pure speculator finds an explanation that satisfies his mind; and from then on, don't bother him with facts, because he's satisfied himself. If his explanation doesn't explain all the data, it explains enough of it to put his curiosity at rest, and he considers the scientists, who say, "Well, yes, but might it not also be that..." to be kooks who like to waste their time belaboring the obvious.
That there might be something to say for the scientific type of curiosity that isn't satisfied with any old explanation can be seen from the example above of teen-age pregnancy. Suppose ignorance is only one factor in the problem, and not the most important one at that. The speculators above will now spend lots of money establishing "pregnancy-avoidance clinics" in schools and so on, where they will make contraceptive information and contraceptives available to teen agers.
Suppose, however, the major factor in the problem is that teen agers get the idea that it's not really a good thing to have sex whenever you feel like it, and so sleeping around is not "really right." These clinics, however, say that there's nothing really so terrible about sleeping around; it's getting pregnant that's the disaster. Then the kids are pulled in two directions: don't do it anyway; but it's really okay as long as you don't get pregnant. This means that the ones who want to do the right thing, and who consider themselves as "decent" will not take the precautions which imply that they're actually planning to be promiscuous; and so they'll go out on dates without protection. But when they get alone together, and the sexual urge gets strong, the other side of the equation of "there's nothing really bad about this" will influence them and they'll have sex--in a situation where they're likely to get pregnant.
But if this is an important explanation of teen-age pregnancy, then "pregnancy avoidance clinics" (based on what "stands to reason") are actually calculated to increase teen pregnancy, not decrease it.
Tell this, however, to the speculators, and they will shout you down as "restricting reproductive freedom" or something.
The point here, of course, is that it isn't just the scientist who finds explanations; we all do. The difference is that the scientist wants to find the cause, while the rest of us assume that any satisfying explanation is the "cause."
3.1.2. Thought experiments
Sometimes it is not possible actually to perform experiments, because the conditions under which they would have to be done are so extreme as to rule them out. In these cases, scientists sometimes resort to a kind of speculation called a "thought experiment" or "Gedankenexperiment" (from the German Gedanke, "thought").
Here, the known properties of what you are working with are "extrapolated" (carried farther than observed) into the conditions under which the experiment would take place, and then, based on what you know from observation and can assume from your extrapolation, you consider what the objects would probably be doing in these conditions, provided you could get them there. If these predicted behaviors match the observed data, then this is taken to be some indication that your hypothesis is not false; if they don't, then this is more than a hint that something is wrong with it--but often not much more than a hint.
The problem is that the behavior of things doesn't necessarily follow your extrapolation. For instance, as you lower the temperature of electrically conducting materials, the degree to which they conduct electricity increases (i.e. their resistance decreases), and it does so smoothly. You would therefore suppose that at very low temperatures, electrical resistance would simply keep shrinking gradually toward some small residual value. But once people were able to get extremely low temperatures (close to "absolute zero"), it was found that certain metals and so on became "superconducting": their resistance did not taper off gradually, but dropped abruptly to zero.
Any thought-experiment, then, based on extrapolating how the resistance of materials varies with temperature would be wildly off the mark. And this sort of thing is always possible. Hence, thought-experiments have to be (and usually are) taken with a large grain of salt. But if nothing better is available, they can be useful.
3.2. Science and mathematics
Of course, as I mentioned, when the scientist gets through his experiment, he's still only in speculation, though it's better speculation than the ordinary person's. But before we go on to a further step in the search for the cause, let us pause and consider why measurement and mathematics play such a heavy role in scientific investigation.
Measurement is important in science because (a) if the objects can be measured, then this is an aspect of them which may enter into the effect and the cause; and (b) even in other cases, measurement can allow finer variations than mere qualitative ones.
In the example of teen pregnancy above, for instance, you can't measure attitudes. But you could take polls (if you're careful) among teens who got pregnant to find out what percentage of them knew anything about contraception.
The actual percentage is not terribly significant in itself; but if, say, only ten percent of them knew what you were talking about when you mentioned contraceptives, then the "ignorance" hypothesis is much stronger than if half or two thirds of them knew about it. It isn't the numbers themselves which do the job, but the numbers allow you to have a control over something which otherwise is apt to slip into the "it stands to reason" category.
Measurement, however, can become a fetish which actually gets in the way of scientific investigation. It sometimes is the case that numerical results that mean nothing are taken as "fact" because they are numerical.
For instance, one college I know once had students rate faculty on various aspects of teaching performance. The scale ran from one ("bad") through three ("average") and four ("good") to five ("excellent"). All of the answers were then added up for each professor and divided by the number of questions, so that the professor got an "average" evaluation from each student. Say one student's "average" came out to 3.6. Then each student's "average evaluation" was "averaged" with the other students', so that the professor got an "average evaluation" of the class as a whole; say, 4.2. Then this average was averaged in with the averages of his other classes, so that the professor's "final average evaluation" for the semester turned out to be, say, 4.1--meaning that the "average student" he had that semester, if he had answered all of the questions with the same number, would have given him a 4.1, which is a little to the "excellent" side of "good."
Then this average was compared with the averages the other professors got in this same semester, and the "average average average average" for all the professors (Jones got 4.1, Smith got 4.3, Doe got 4.0, etc.; the average coming out to, say, 4.2) was arrived at.
Our professor is now compared with this "average professor's average average," and it turns out that with his 4.1 he is below it; and it is now "scientifically concluded" that he is a below-average teacher. This in spite of the fact that his students' evaluation of him was to the excellent side of good--definitely above average.
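The arithmetic of this "average of averages" trap can be sketched in a few lines. The professors' names come from the example above, but every rating number below is invented for the illustration:

```python
# Hypothetical illustration of the "average of averages" trap described above.
# All rating numbers are invented; the 1-5 scale follows the example in the text.

def professor_score(classes):
    """Average each student's answers, then each class, then the semester."""
    class_averages = []
    for class_ratings in classes:
        student_averages = [sum(r) / len(r) for r in class_ratings]
        class_averages.append(sum(student_averages) / len(student_averages))
    return sum(class_averages) / len(class_averages)

# Each inner list is one student's answers to a five-question form.
jones = [[[4, 4, 4, 4, 5], [4, 4, 4, 4, 4]]]   # one class of two students
smith = [[[5, 4, 4, 4, 5], [4, 5, 4, 4, 4]]]
doe = [[[4, 4, 4, 4, 4], [4, 4, 4, 4, 4]]]

scores = {"Jones": professor_score(jones),
          "Smith": professor_score(smith),
          "Doe": professor_score(doe)}
faculty_mean = sum(scores.values()) / len(scores)

# Jones rates 4.1 (solidly "good"), yet lands below the faculty mean.
for name in sorted(scores):
    verdict = "below the faculty average" if scores[name] < faculty_mean else "at or above it"
    print(name, round(scores[name], 2), verdict)
```

The numbers behave exactly as in the text: every student rated Jones "good" or better, but because the faculty mean of these already-averaged averages happens to sit a notch higher, the comparison brands him "below average."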
Here you have the manipulation of numbers leading you into a never-never land where the conclusion directly contradicts any meaning the original data had. They were subjective evaluations in the first place, and attaching numbers to them is supposed to give you some kind of "objectivity" (it doesn't). Then, once you have the numbers, you can manipulate them in all sorts of interesting ways; but the manipulation has nothing to do with anything that the original "measurements" corresponded to; and the "objective conclusion" you come up with is simply a wild flight of fancy.
IQ tests are notorious for this. Stephen Jay Gould has written a whole book called The Mismeasure of Man which shows what a mess you can get into when you take numbers, however arrived at, as "objective" and "factual," especially with things like IQ tests.
There is a good deal of airy speculation that involves numbers, in other words, and has nothing to do with true science.
Numbers are not magic. Not everything has to be in terms of numbers to be scientific, and not everything that involves numbers--even complicated uses of numbers--is scientific.
3.2.1. The logic of mathematics
The other reason why mathematics is useful in science is connected with the silly conclusions drawn in the evaluative process above. Mathematics allows one to manipulate numbers logically; and when these numbers represent a real aspect of something, then the logic of mathematics can sometimes reveal the way things behave.
There is also the fact that mathematics is a kind of logic that allows for "inverse operations," so that mathematical problems can be worked in opposite directions. That is, the inverse operation of 2 + 2 = 4 is 4 - 2 = 2; the inverse operation of 3 x 2 = 6 is 6/2 = 3, and so on. You can take the answer of one operation, perform the reverse operation with it, and get back one of the original premises.
This is very handy in science, because you are starting with the effect, which, as I said, is the "then" clause of the "if-then" statement--or, if you will, is the "answer" to your explanatory statement. If you couch your "if-then" statement in mathematical terms, then, it is sometimes possible to arrive at the cause by using the proper inverse operation. You may not know much about it; but you've got a track to use to find it.
For instance, the calculus starts with a derivative: dy/dx = some function. Antidifferentiation (which is not the same as integration, for reasons we don't have to explore here) will give you a whole family of equations whose derivatives are all the one in question. This sounds very like the infinity of explanations for a given effect--and not surprisingly so. But what you learn with this family of equations is that they all have certain properties in common and differ only in what is called the "constant of integration." And if you integrate over a certain range, you get a "definite integral" which tells you what was going on in an interval within which you observed your effect.
Interestingly, the calculus was simultaneously (and independently) invented by Gottfried Leibniz, who was interested in it for mathematical and metaphysical reasons, and Isaac Newton, who needed it for solutions to physical problems. Mathematicians are not really concerned with the applications in science, but only with the logic involved; for scientists, however, problems of the consistency of the mathematics are secondary to the fact that it works--and it works not only from cause to effect, but from effect to cause.
3.2.2. Probability and statistics
But let us look at one mathematical tool and investigate why it works--because it looks on the face of it as if it shouldn't; but it is very useful in science. And it turns out that we can perform a couple of thought experiments and show that our own theory of science forms the best explanation of why it works. I am referring to the mathematics of probability, and especially to its inverse operation: statistics.
The effect here is how there can be laws of probability. A law indicates a constant, non-random relationship; but probability deals with objects that behave randomly. How can randomness be non-random? There are several explanations, two of which we will test as hypotheses, using a die (one of a pair of dice--one for simplicity) as an object for thought-experiments.
First hypothesis: The laws of probability indicate that what you thought was behaving randomly actually wasn't.
This would mean that when you throw the die and the laws of probability show up in the results, you only thought you were throwing the die randomly; really, you weren't.
We can test this by giving you a loaded die. Here we know that one side of the die is favored so that it winds up on the bottom, making it not completely random which side will end up there.
But in this case (let us say the die is weighted so that the four comes out on top), the four comes out more often than you would expect from the laws of probability. So when the throws aren't random, the laws don't work. Hence, there must be randomness for the laws to operate, and this hypothesis is ruled out.
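A quick simulation makes the same test. The weighting scheme below is invented for the sketch (a simple three-to-one bias toward the four), but it shows the point: bias the die, and the one-in-six ratio the laws of probability predict disappears.

```python
import random

def roll_many(weights, trials=60_000, seed=1):
    """Roll a six-sided die `trials` times; `weights` biases each face."""
    rng = random.Random(seed)
    faces = [1, 2, 3, 4, 5, 6]
    counts = {f: 0 for f in faces}
    for _ in range(trials):
        counts[rng.choices(faces, weights=weights)[0]] += 1
    return {f: counts[f] / trials for f in faces}

fair = roll_many([1, 1, 1, 1, 1, 1])    # every face equally likely
loaded = roll_many([1, 1, 1, 3, 1, 1])  # the four favored three-to-one

print("fair die, four on top:  ", fair[4])    # close to 1/6
print("loaded die, four on top:", loaded[4])  # near 3/8, far from 1/6
```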
Second hypothesis: The non-randomness is due to the constant underlying structure of what operates (otherwise) randomly.
What this hypothesis says is that the operations themselves are random; but the underlying structure of what is operating produces constraints on these operations, which give them a quasi-systematic character--i.e. prevents the operations from being totally random.
If we roll our die, for instance, we find that the laws of probability predict that the four will come out on top one-sixth of the time--and there are six faces to the die. If we flip a coin, heads is predicted half the time--and there are two sides to the coin. This suggests that the total number of sides (i.e. possibilities) and the side that comes up (the possibility which is realized) are related, and it is the fact that the total number of possibilities remains constant that makes the laws work.
Let us now perform a thought-experiment. Let us make a die out of soft clay or something which will be deformed when it hits the table as we roll it. We form it into a die and put the spots on it; but each time we roll it, it gets a new "face" when it hits the table, so that you can't predict how many faces it will wind up with on each roll--the number could vary from one (if it makes itself into a ball), to two (if it makes a lens), right up to infinity (the ball again). Now, what percentage of the time is any spot we put on it going to come out on top? You can't predict anything. So this experiment confirms the hypothesis.
Note, however, that if you form a die (supposing you could do so--this is the neat thing about thought-experiments) that could vary randomly in number of faces on every throw from three to six, but could never have fewer than three or more than six, you could now get a probability relation. Not to bore you with the mathematics, the reasoning would be like this: one-fourth of the time, there will be three faces, one-fourth, four faces, one-fourth, five, and one-fourth, six faces; on the one-fourth of the time there are three faces, one-third of those throws will have our special face on top; on the one-fourth when there are four faces, one-fourth of those will have our face on top--and so on. Weighting each of these probabilities by the one-fourth chance of that number of faces and adding the results will give you the probability of the face's coming out on top with this variable die.
But this is consistent with the hypothesis. The fact that the underlying structure of the die makes the variation in faces constant, puts a constraint on the total number of possibilities that can be realized in the random operations, and prevents them from being absolutely random.
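The reasoning about the three-to-six-faced die can be checked in a few lines, assuming (as in the text) that each number of faces is equally likely and that the special face then has one chance in n of ending on top:

```python
from fractions import Fraction
import random

# Exact version of the reasoning above: the die has n faces (n = 3..6) with
# probability 1/4 each, and in that case the special face ends on top 1/n of
# the time. The combined probability is the sum of the products (1/4)(1/n).
p_exact = sum(Fraction(1, 4) * Fraction(1, n) for n in range(3, 7))
print(p_exact)  # 19/80, i.e. 0.2375

# A Monte Carlo check of the same thought-experiment.
rng = random.Random(0)
trials = 200_000
hits = sum(1 for _ in range(trials)
           if rng.randrange(rng.choice([3, 4, 5, 6])) == 0)  # face 0 is "ours"
print(hits / trials)  # comes out close to 0.2375
```

The simulated ratio converges on the computed one, which is just the "constraint on the total number of possibilities" showing up as a stable number.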
So it seems that the laws of probability DO need random operations, but the "lawfulness" is due NOT to the randomness, but to SOMETHING THAT PREVENTS the randomness from being absolute.
If you can find this constant underlying structure, you can predict that the random operations will not systematically vary from a given ratio between the total number of possibilities and the number of attempts at realizing one of them.
What do I mean by "will not systematically vary"? This is a more technical way of saying "in the long run will come out to" the ratio in question. That is, there may be deviations from this ratio that are large; but they will tend to be "compensated for" by deviations in the other direction--though not in any systematic way. Thus, if you flip a coin, you might get twenty heads in a row; but as you keep flipping, you will find that you get more tails than heads, until the total number of times heads comes up (as the number of flips becomes very large) approaches half the number of flips.
This, by the way, is not mathematically necessary. All the mathematics says (taking the die as an example) is that there is no greater likelihood of the four coming out on top than of any of the other five faces; and this ratio of one side to the total is one out of six. It then suggests that, since there isn't a greater likelihood, there might be a parallel ratio between the number of times the four comes up and the number of rolls.
But there is no mathematical or logical reason why this second ratio would have to hold if the first one does. There is nothing logically to prevent the rolls from being totally random (not converging on any ratio at all, the way our soft die behaved). It "stands to reason" that the ratio would appear; but that does not prove that it has to. True, no other ratio would be logically allowed; but total randomness is not excluded by the mathematics itself.
It turns out, however, that experiments with actual objects tend to verify this prediction, and hence, we can say that the theory we have above explains a constancy in otherwise random operations. So probability is not actually a "mathematical" law at all; it is an empirical law that has a mathematical foundation--there is nothing, in other words, in the mathematics itself that says it has to work out in the real world; but we investigate and find that it just does.
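As a stand-in for such experiments with actual objects, a simulated fair coin shows the behavior being described: the running proportion of heads settles toward one-half as the flips pile up (the checkpoints below are arbitrary):

```python
import random

# Flip a simulated fair coin and record the running proportion of heads at a
# few checkpoints. Early deviations can be sizable, but the proportion (not
# the raw excess of heads) drifts toward one-half as flips accumulate.
rng = random.Random(42)
heads = 0
proportions = {}
for flip in range(1, 100_001):
    heads += rng.randrange(2)          # 1 counts as a head
    if flip in (100, 10_000, 100_000):
        proportions[flip] = heads / flip

for n in (100, 10_000, 100_000):
    print(n, "flips:", round(proportions[n], 4))
```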
The reason I stress this is that some think that the laws of probability are "purely mathematical" and work for that reason. But there are others who think that what "stands to reason" has to work out in practice. Both are wrong.
The so-called "law of averages" shows the latter fallacy. This is what happens when you're flipping a coin and you've got a long string of heads--say, heads has come up twenty times in a row. Would you bet on heads the next time? Well, it's very unlikely that heads would come up twenty-one times in a row; the "law of averages" says that tails has to start coming up soon to make up for the twenty heads you got in a row. So it's more likely that you'll get tails this time than heads, right?
Wrong. There is a fifty-fifty chance that you'll get heads, just as on any flip. Why? (a) The coin doesn't know that twenty heads have come up. "Yes," you say, "but twenty-one heads in a row is much more unlikely than even twenty." (b) True; but given the (very unlikely) event of twenty, the mathematics of the laws of probability works out that it is now just as likely that there will be twenty-one as that the run will stop at twenty.
The laws of probability state that it is very unlikely that twenty-one heads will come up in a row; even more unlikely than that twenty will. The "law of averages" says that it "stands to reason" that if twenty have come up, it is more likely that tails will come up the twenty-first time. Both laws "stand to reason" and neither says what has to be the case; but the laws of probability do describe what goes on in the real world and the "law of averages" doesn't.
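A simulation makes the point concrete. Runs of five heads are used instead of twenty simply because they occur often enough to count; the logic is the same:

```python
import random

# After a run of heads, is tails "due"? Collect every flip that follows a run
# of at least five heads and see how often it comes up heads anyway.
rng = random.Random(7)
flips = [rng.randrange(2) for _ in range(1_000_000)]   # 1 = heads

after_run = []
run = 0
for i in range(len(flips) - 1):
    run = run + 1 if flips[i] == 1 else 0
    if run >= 5:
        after_run.append(flips[i + 1])

freq = sum(after_run) / len(after_run)
print(len(after_run), "runs; heads followed", round(freq, 3), "of the time")
```

The frequency stays near one-half: the "law of averages" never shows up, because the coin doesn't know what it did before.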
What then have we done?
The "constant underlying structure" is the CAUSE explaining why the laws of probability work.
But now let us briefly look at statistics. Statistics is probability worked backwards: its "inverse operation." What happens is that a person notices a ratio that looks like a probability ratio in what otherwise seems random; he then hypothesizes that there is some "constant underlying structure" and goes looking for it. If he finds one, he then says that the statistics are "valid" and makes predictions about future behavior.
For instance, a scientist notices that teen agers have more automobile accidents than married middle-aged men, say. Is this greater number of accidents per number of miles driven just an accident (pure randomness), or is there something in the nature of the teen ager as opposed to the middle-aged driver that would explain it?
Well, teen-agers have quicker reflexes (which if anything would argue against more accidents), but are more apt to try to test them, and are less apt to be aware that actions sometimes have irrevocable consequences. These last two characteristics of teen agers would lead you to expect them to be more reckless when driving, and so to get into more accidents. Hence, the statistics are probably valid; there does seem to be a "constant underlying structure" putting constraints on what otherwise would be random.
Similarly with smoking and lung cancer. The "tar" in tobacco smoke, when isolated, is clearly toxic, and when injected into animals results in increased cancers. You would therefore expect that taking this stuff into your lungs would cause damage, and specifically lung cancer. And smokers do get more lung cancer than non-smokers; hence the statistics are valid, and the tobacco-company hypothesis (that both are effects of a more remote cause) is a smoke screen.
The reason some statistics are valid and some aren't is that probability-like ratios can occur by chance, where there are no underlying constraints. For instance, the statistical ratios of "average evaluations" of the professors I mentioned a while back are simply numbers which, a little thought will show, do not reveal how good the teacher "really is," as if this were the constant underlying structure which gave rise to the ratio, or even how good the teacher was really "thought to be" by the students.
In fact, I once did a study of some two hundred evaluations (confirmed by other studies) and I found a very strong correlation between the grade the student expected and how highly he ranked the professor (in almost any area you want to name--such as "sense of humor"). Now it "stands to reason" that a student who expects to get a good grade in a course is going to think the teacher is pretty good (even if he finds the course boring), and one who expects to fail is going to blame the teacher, not himself. And so, given the psychology of the student, you would expect this correlation, whether the teacher is actually any good or not. And it occurs.
The point is that (a) a ratio might be just chance--after all, if there are two numbers that can be set into relation with each other, they have to come out to some ratio; and (b) that it might indicate a constant underlying structure that is very different from the superficially obvious one.
To find this latter, you would have to perform experiments, varying what they call the "parameters" (the things that can vary, some of which might not make a difference) and seeing which things affect the results and which don't. Only then do you have some hope that your statistics are valid.
So statistics reveal a constant underlying structure which forms the cause of the observed ratio.
NOTE that this means that things that are describable statistically are so describable NOT because of "chance" but because of the NATURE of the thing operating.
The "nature," of course, is the "constant underlying structure." So when statisticians find things about teen agers and accidents, they do so on the basis of the nature of teen agers; when the Surgeon General gives statistics about smoking and lung cancer, he has revealed something of the nature of smoking, and so on. These things don't deal with the random aspect of what is operating; they focus on its non-random element.
Hence, our theory of effect and cause can explain a good deal about probability and statistics. It sounds as if we have a rather powerful theory going for us.
3.3. Fourth step: theory
But this brings up the next "step" of scientific method, which isn't really a step at all, but just a name. A hypothesis that has survived the experiment stage isn't called a hypothesis any more, but a theory.
DEFINITION: A THEORY is a detailed statement of what is thought to be the cause of the effect in question.
Since we have defined "theory," and since in our discussion of probability we were talking about the "laws" of probability, we might as well define a law.
DEFINITION: A LAW is a constant relationship that obtains in reality.
Laws are facts; relationships "out there." Now if a theory is so well verified that it is taken to be a fact, it is sometimes called a "law"--like, for instance, Newton's "law of gravitation." (As a matter of fact, this "law" is false; but it was assumed until the beginning of this century to be unassailably the case.)
But law and theory don't mean the same thing. A law is a relationship, whether it explains anything or not, so long as it is constant. A theory always is an explanation, whether it involves a constant relationship or not.
Obviously most theories will talk about what are supposed to be constant relationships (because they explain the effects, so that there is a constant relationship such that whenever the effect occurs, the cause is there--and the theory states what it thinks the cause really is); but laws can just be observed connections.
For instance, Charles's and Gay-Lussac's laws of gases state that as temperature increases, either the volume or the pressure of the gas (depending on which law) increases in a definite ratio to the increase of temperature. This was just observed as a fact. Take a gas (of a certain type), put it at the freezing point of water, raise it one degree Celsius, and you will find that if it's in an expandable container, it will expand; and if it isn't, the pressure on the container will increase by about 1/273 of its value at the freezing point.
The kinetic-molecular theory of gases explains this law: it says that heat is motion of molecules; but if molecules are moving faster, then they will need more room to move around in, and will hit the container harder. Thus, expansion and/or pressure increase.
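The relation can be sketched numerically. The following is a minimal illustration (the function name and the relative-pressure convention are ours), assuming ideal-gas behavior: at constant volume, pressure is proportional to absolute temperature, with zero Celsius at 273.15 kelvins.

```python
ZERO_CELSIUS = 273.15   # zero degrees Celsius, expressed in kelvins

def pressure_at(celsius, p0=1.0):
    """Pressure of a fixed volume of ideal gas, relative to its
    pressure p0 at zero degrees Celsius."""
    return p0 * (ZERO_CELSIUS + celsius) / ZERO_CELSIUS

# Raising the gas one degree from the freezing point of water
# increases the pressure by about 1/273 of its value at zero.
print(pressure_at(1.0) - pressure_at(0.0))
```

Doubling the absolute temperature (to 273.15 degrees Celsius) doubles the pressure, which is just the kinetic-molecular picture in numbers.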
3.3.1. Criteria for a good theory
Now then, according to the canons of scientific method, a theory, in order to be a good one, has to be (1) simple, (2) comprehensive, and (3) logical; it is also held that unless a theory makes predictions, it really isn't much in the way of being a theory. We will discuss all of these.
First of all, as I mentioned, "simplicity" does not mean that the theory is easy to understand or doesn't involve complex logic; it means that the theory doesn't assert the existence of very much that can't be observed.
The reason for this (based on our theory of science) is that a theory is an explanation of an effect, and hence is something that makes sense out of what otherwise doesn't make sense. In other words, it makes reasonable what is otherwise unreasonable.
Now in discussing probability, we saw that chance doesn't explain anything; the only reasonableness (or "lawfulness") about probability doesn't come from the chance element, but from the constant constraints upon it. What "just happens" may be true, but there's nothing satisfying to reason in an event that's simply a fact.
Therefore, if a theory states three or four or five facts, each independent of one another, as the "explanation" of the effect in question, then no one of the facts explains by itself, but all five together form the "real" explanation.
But if these events are independent of each other, then the explanation hinges upon the fact that the five of them just happen to be operating together. In other words, the "explanation" ultimately rests on chance--or the coincidence of the five elements.
Hence, the more elements you get in your theory, the greater the role chance takes in the "explanation"; but chance doesn't explain something--and so your theory is a bad theory.
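The arithmetic behind this can be made explicit. Probabilities of independent events multiply; so in this toy calculation (the probability 0.2 is invented purely for illustration), the "coincidence" an explanation rests on becomes rapidly less probable as independent elements are added:

```python
# If an "explanation" rests on k independent conditions, each of which
# just happens to hold with probability p, the chance that all k hold
# together is p ** k, which shrinks geometrically as k grows.
p = 0.2
for k in (1, 2, 5):
    print(k, "independent elements:", p ** k)
```

With five independent elements the joint probability is already p to the fifth power; the more the theory multiplies elements, the more its "explanation" is itself an improbable accident.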
Of course, if the many elements in the theory are connected by some fact, then it is this (single) connection among the elements that explains why they are present together; and so the connection becomes the simple fact that is the real basis of the explanation.
So it isn't that a coincidence of many factors can't in fact produce results; that is not why good theories have to be simple. Rather, if a coincidence did produce the result, there is no way to get at this coincidence by reason. You might just as well have stopped with the effect and said, "Well, it happened somehow" for all the satisfaction your mind is going to get out of an "explanation" that "just happened."
And this is why, of course, scientists aren't really like detectives. An actual murder, for instance, very often hinges on the chance coming together of a number of independent elements, where people do improbable things because they just happened to be in an odd mood at a time when someone else just happened to say something that lit the fuse, and some passerby just happened to be someone with a very strong motive for wanting the victim dead, and had threatened him the night before, and so on--and it rained, washing away the clues; and the weapon caught in a branch as it was thrown into the stream, and on and on. To find out what actually happened in a case like this depends as much on luck as on ingenuity. Anyone who is at all intelligent and has read all but the absolutely best detective fiction can figure out a number of other solutions to the riddle, involving someone other than the author's villain.
So the romance of science is not really true; scientific theories are simple, not because the truth is simple, but because that's the only kind of thing our minds can make any progress with in making sense out of what doesn't make sense by itself.
And so our theory explains why theories are simple, and also why the more complicated cause might in a given case actually be the true one--and our theory is based on the simple fact that scientists are looking for the true explanation of the effect.
Secondly, a good theory has to be comprehensive: that is, explain all the elements of the effect that it is supposed to be the explanation of.
This sounds trivial; and it might seem that we already saw it when we were discussing the experiment stage of the method. Any theory that leaves some facts unexplained, of course, leaves something about reality self-contradictory or impossible; and therefore, it is no explanation.
But the theory has to explain all the aspects of the actual effect, not just the ones that were thought by the scientist to be the aspects of the effect when he made his observation.
That is, there may be aspects of the effect that no one was aware of at the time of making the experiments and formulating the theory, and these aspects might change the whole nature of the effect (and so make the "cause" expressed in the theory not the actual cause at all).
And this has happened in science. Not the least notorious case is that of Newton's theory of universal gravitation. Newton theorized that what made bodies fall was, as I mentioned, a force that was proportional to the product of the masses and inversely proportional to the square of the distance between the bodies' centers of mass. This theory also explained why orbiting bodies stay in orbit: basically, they are falling toward the central body, but they have such a great speed (initial tangential velocity) in a straight line that they "miss the edge" in their fall, so to speak, and fall "around" the body instead of into it.
Well, the point is that the theory explained very accurately all the motions of the planets, once you took into account that their orbits would be affected by the pulls of other planets as well as the sun. And all was rosy.
All, that is, until the beginning of this century, when extremely accurate measurements of the orbit of the planet Mercury were made. It turned out that Mercury's orbit was off from what Newton said it should be by a matter of some forty-three seconds of arc per century. (For those who are curious, an angle of ninety degrees, drawn at the center of a circle, cuts through the circumference enclosing an arc of that circumference: an arc of ninety degrees. An angle of one second is a sixtieth of a sixtieth of a degree; so a second of arc is a very small angle indeed. I have never looked up the actual linear distance, but it might have been just a few miles from where Newton said it should have been.)
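We can estimate that linear distance. Assuming a round value of 5.8e10 meters for Mercury's mean distance from the sun (an assumed figure for illustration), a small-angle computation gives a couple of hundred kilometers per second of arc:

```python
import math

ARCSEC_IN_RADIANS = math.pi / (180 * 3600)   # one second of arc

def linear_offset(arcsec, radius_m):
    """Linear distance subtended by the given number of seconds of arc
    at the given radius (small-angle approximation)."""
    return arcsec * ARCSEC_IN_RADIANS * radius_m

# Assumed round value: Mercury's mean distance from the sun, in meters.
MERCURY_ORBIT_M = 5.8e10

km_per_arcsec = linear_offset(1, MERCURY_ORBIT_M) / 1000
print(km_per_arcsec)  # a couple of hundred kilometers
```

Small as the angle is, then, the discrepancy in position is a perfectly definite and measurable distance.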
The point is, though, that this tiny discrepancy between what the theory said the facts had to be and what the facts actually were destroyed the theory. The force of gravity couldn't be the cause that explained why Mercury traveled round the sun.
Of course, scientists noticing this were in a quandary, and were more ready to doubt the observations than the theory--until Einstein came along with an alternative explanation (based, as I mentioned, on a warping of space-time) which accounted for the whole of the motion of Mercury as well as all that Newton could account for. Einstein's theory is comprehensive; Newton's isn't--even though Newton thought his theory explained all the facts, and so did everyone else for a couple of centuries.
The third criterion of a good theory is that it be "logical," which means that the effect in all of its aspects should follow logically from what is stated to be the cause (i.e. if the cause is what it is stated to be, then all aspects of the effect would have to be what they are observed to be--either now or in the future). This is another triviality, from the point of view of our theory of science.
But I would like to discuss here the problem of induction, which is a logic used in science, and which, it seems, cannot be a logic, because it seems to violate one of the principal canons of logic. We will formulate the effect, and give several hypotheses, rejecting all but what I think is the true one--which follows from our notion of cause and effect.
The effect connected with induction is this: First, induction cannot be a form of logic, because induction argues from observing a few instances of what something does to what every instance of that type of thing does--and one of the main principles of logic is that you can't argue "from the particular to the universal" (from "some" to "all").
Yet induction has to be a form of logic, because, when we say, for instance, that hydrogen combines with oxygen to form water, we (a) have directly observed only a few instances of hydrogen's doing this; but (b) we know that--under the proper conditions--any instance of hydrogen will do it. We have to "know" this by some kind of reasoning, because (given that hydrogen is the most plentiful element in the universe), we certainly didn't know it from observation.
There are several hypotheses that have been offered to explain this effect. Let us look at them.
First, there is David Hume's solution, which is basically that we don't know that all instances of hydrogen combine with oxygen to form water. All we know is that the ones we observed have done it. But since every time we have brought hydrogen and oxygen together in the past we got water as a result, we have built up a habit or expectation of seeing it happen in the future; and so we (mistakenly) suppose that somehow it "has" to happen or that it "will" happen every time we try. But this is a supposition or a belief (or perhaps a hope), not knowledge.
This, however, makes induction the equivalent of saying, "All the living things in this room are human," because you've looked and all you see are people. But then someone shows you a spider hiding under the sofa; and you simply say, "Well, not all the living things in this room are human, then."
But if a scientist took a bottle labeled "Hydrogen" and combined it with oxygen and what he got was a green gas, he wouldn't say, "Well, not all hydrogen combines with oxygen to form water"; he would say, "That's funny; there's supposed to be hydrogen in that bottle."
That is, induction results in what has been called a "lawlike generalization," where the person who holds it will deny observed instances that seem to violate it before he will give it up as true. (Lawlike generalizations, as they say, "support counterfactual instances.") The scientist will hold on to the results of his induction until the evidence against it becomes overwhelming. One or two cases will not make him give it up as false--the way a simple observation, like that of the living things in the room, would.
But this hypothesis basically puts these two kinds of general statements on the same footing, allowing no way to distinguish one from the other. But we do make the distinction.
So--sorry, Mr. Hume, but your hypothesis doesn't fit the facts you were trying to explain.
Second, to account for the difference, some philosophers have said that what happens in cases like hydrogen and other "lawlike" generalizations is that, once we get to the general statement, we define the object we are dealing with as "whatever-it-is-that-does-such-and-such"; and then, obviously, anything like the object that doesn't do it falls outside the definition we made, and so isn't what we are talking about.
What I mean is this. You observe some stuff combining with oxygen to form water, and you say, "Let's call anything that combines with oxygen to form water 'hydrogen.'" Then it will have to be the case that all instances of hydrogen as you defined it will combine with oxygen to form water. If something doesn't, then it doesn't fit your definition of "hydrogen."
The trouble with this is that it will allow you to name only one property of the object in question. As soon as you make an induction and discover a second property of the same object, you can't use your "Let's call...whatever does..." any more, because you've already done this, and (a) if you define your object as "what does both things," you won't know that you've caught every instance of what does the first thing, and (b) you don't know from observation that the two properties will always go together. But in fact, the scientist does.
That is, suppose a scientist is studying the spectrum of hydrogen (i.e. the stuff that combines with oxygen to form water). He notices that in all the cases he observes, when it burns, it produces blue lines on the spectroscope. He then concludes that "All cases of hydrogen have such-and-such lines in the blue region of the spectrum."
With the "definition" hypothesis, he can't know this. If he now defines "hydrogen" as "whatever has these spectral lines," how does he know that this is also in every case the stuff that combines with oxygen to form water? And the stuff that (based on other observations) combines with sulfur to make that gas that smells like rotten eggs? And the stuff that combines with chlorine to make hydrochloric acid? And so on.
The scientist is supremely confident that when you've got one of these properties, you've got all the rest too. But the "definition" hypothesis will explain the universality (the "allness") only for one property. So this doesn't work.
Third, some philosophers have said that the "all" isn't really "all," but a probability statement. That is, the scientist observes hydrogen and oxygen combining to form water; and it works every time he tries. What his "All hydrogen combines with oxygen to form water" really means, according to this hypothesis, is "The probability is very high that any instance of hydrogen is also going to exhibit this behavior."
This sounds promising, until we look at it. Hydrogen--at least the stuff with the blue spectral lines--is, as I said, the most plentiful element in the universe. But the scientist has observed combining with oxygen (by a conservative estimate) only a billion-billionth of a percent of all the hydrogen there is; and he has observed this only on the earth, and under the very special conditions of the laboratory.
But you can make a statistical generalization of probability only when you have observed what is known to be a representative sample. This means (a) that it has to be a fairly hefty percentage of the total "population," and (b) that it can't be observed in special conditions, which might make the observed sample behave unrepresentatively. That is, you don't go into a Democratic party rally, ask the people there whom they're going to vote for, and conclude from this that the Democrat will win in a landslide because everyone in the country is going to vote for him.
But these two conditions for using statistics to form generalizations are precisely what are not present with the observation of hydrogen. Hence, based on statistics and probability, we are at the Democratic rally, and it is exceedingly UNlikely that hydrogen combines with oxygen to form water.
So that doesn't work either.
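The rally analogy can be simulated. In this toy sketch (all numbers invented: a 50/50 electorate, a rally that is 95% supporters), a representative draw estimates the true split well, while the biased sampling procedure is hopelessly off:

```python
import random

def poll(population, sample, n, seed=1):
    """Estimate the fraction of Democratic voters from n respondents
    drawn by the given sampling procedure."""
    rng = random.Random(seed)
    return sum(sample(population, rng) for _ in range(n)) / n

# A toy electorate: exactly half votes Democratic (True).
population = [True] * 5000 + [False] * 5000

def random_sample(pop, rng):
    return rng.choice(pop)          # a representative draw

def rally_sample(pop, rng):
    # At a party rally, suppose 95% of those present are supporters,
    # no matter how the whole electorate actually divides.
    return rng.random() < 0.95

print(poll(population, random_sample, 2000))  # close to 0.5
print(poll(population, rally_sample, 2000))   # close to 0.95
```

The special conditions of the laboratory play the role of the rally here: a sample gathered under them tells you about the conditions, not about the population.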
What does work? (a) The scientist observes enough instances of the behavior of some object to give him a subjective impression that the behavior isn't just chance.
How many is this? Well, if it's something like voters, whose behavior is erratic, he knows he has to observe a lot of them in varying circumstances. If it's something like hydrogen, which seems to behave the same way all the time, he doesn't think he has to observe many instances. This is all rather subjective at this stage.
(b) If the behavior isn't due to chance, then it has a--here's the word--cause which explains its constancy. So the scientist hypothesizes that there's a cause involved.
(c) And where would he look to find it? Obviously in the thing that's doing the constant behaving (the "constant underlying structure" again, only now it's not accounting for the lawfulness of random acts but the lawfulness of constant acts).
(d) If there is something about the structure of the object that would make it reasonable to expect the behavior (that would logically result in the behavior), then the scientist concludes
(e) That in every instance where you have an object with this structure, you will get the behavior.
...And there we are. Induction makes sense in terms of cause and effect, and in our realizing that the "nature" of the object is what explains its constant behavior.
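Steps (a) through (e) can be caricatured in code. This is only a toy sketch of the inference pattern, not anything a scientist literally runs, and the threshold for "enough instances" is, as the text says, rather subjective:

```python
def lawlike(observations, min_instances=5):
    """A toy sketch of steps (a)-(e): if every observed instance of a
    kind of thing shows the same behavior, and there are enough
    instances to rule out a mere impression of chance, infer that the
    behavior belongs to the nature (the constant underlying structure)
    of that kind of thing."""
    if len(observations) < min_instances:
        return None                       # (a) too few to judge
    first = observations[0]
    if all(obs == first for obs in observations):
        # (b)-(e): constancy is not chance; posit a cause in the
        # thing's structure, and generalize to every instance.
        return "every instance of this kind does: " + first
    return None                           # behavior varies; no law

print(lawlike(["combines with oxygen to form water"] * 8))
print(lawlike(["forms water", "green gas", "forms water"] * 3))
```

The "all" in the conclusion is carried by the posited nature, not by the count of observations; that is the whole difference from Hume's habit.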
Since the other hypotheses don't really explain induction, and since induction is the way the scientist gets his general statements--and so is a good part of scientific method--it sounds as if our theory of science is simple, comprehensive, and (because you would expect it to explain this aspect) logical.
There's more to induction than this, but let's leave it here.
3.4. Models

Scientists often use "models" in a theory; and so our theory of science should explain why they find them useful.
First of all, I think I should say that a "mathematical model" is not really a model, but a mathematical description of the behavior in question. It's just a statement of what is going on in terms of numbers and their interrelations, rather than in terms of words. There's really no problem here.
But describing, for instance, electrons as little pellets moving about and hitting each other, and so on is a model: the "particle-model" of the electron. Electrons are too small to see; but this model says that they're like little billiard balls.
Or then there's the "planetary model" of the atom, where the nucleus is like a complicated sun and the electrons are like little planets whizzing around it. It makes a neat picture to contemplate, but is it of any scientific use?
Some theoreticians of science say that's all a model is: a metaphor that makes things exciting, especially to the readers of Sunday supplements; but it's really not anything significant scientifically at all; it gets the scientists funded, perhaps, but no more.
Yet the particle model of the electron actually led to discoveries about the electron, and the planetary theory of the atom to discoveries about the atom, even though now there is also a wave-model for the electron (which has not displaced the particle one) and there is a "shell" model of the atom which has supplanted the old planetary one. Evidently, the models are useful.
But when you say, "John is a lion" or talk about "the smiling meadow" (metaphors), you learn nothing by observing John or the meadow. Where is the mane on John? Where are the teeth of the meadow? Obviously, the characteristics of the "model" in this case are no help in telling you about the thing you're wanting to learn about.
So models are not metaphors. Metaphors involve emotional similarities, so that John makes you afraid the way a lion does and the meadow makes you feel the same kind of pleasure you feel when smiled at.
Models are analogies.
Here is the solution. We notice that the (unobserved) cause has effects which are similar to some effect of a causer which we can observe. Since similar effects have analogous causes, the unobserved cause must be similar in some unknown way to the cause in the observed causer.
Thus, an electron's equation of motion is similar to the equation of motion of a dust particle in the air, say. Then an electron must be somehow like a dust particle, and by studying the dust particle, we might learn something about electrons.
But an electron's equation is also like the equation of the wave in a pond, in certain respects. Then an electron must be somehow like a wave; and by studying waves (and how they interfere with each other, for instance) we can learn about electrons--maybe.
Of course, in the world we can observe, particles can't simultaneously be waves, because waves are a disturbance of some larger body, and a particle is a little body in its own right.
But the model is only an analogy, and only says that the electron (as cause) is somehow like the particle, and in other (unknown) ways like the wave. Obviously, the two are somehow compatible in the electron as it exists; because an electron isn't really either a particle or a wave, but is only really similar in an unknown way to both.
So our theory of cause and effect, which includes analogy, explains why scientists use models, why they aren't just metaphors and why you can learn things from them, and why they aren't too terribly useful--because we don't know the precise points of identity and difference.
3.5. Last step: verification
We finally come, in our tracing through scientific method, to what is the most important thing scientists do to separate their pursuit of the cause from speculation as to what it might be. So far, all we have seen has given us an explanation which is internally consistent (not self-contradictory) and which logically explains all of the data observed. But there still are an infinity of possibilities that can do this. True, we have picked the simplest of those we have been able to see; but this (as I mentioned) still doesn't mean the explanation we picked is the true one. How do we come closer to this goal?
Scientists consider that a theory which doesn't predict anything which can then be tested is a theory which doesn't significantly differ from pure (if careful) speculation. Non-predicting theories may be the best we can do in a given case; and sometimes we have to live with them--as, for example, the theory that the universe began at a certain time some billions of years ago in an enormous explosion. We can't have conditions that would reproduce this so we could test it.
But even in these cases, the theory will generally predict something, and very often this "something" is open to a kind of experiment, which will see if it actually occurs or not.
Why is it that scientists are so confident that if they examine any theory hard enough, they will find hitherto unobserved facts predicted by the theory, and then can test the theory by looking to see if these indeed are facts or not?
Once again, the notion of effect and cause comes to the rescue; only this time, the solution lies in the nature of the cause, not the effect. The cause (or any explanation), you will remember, is the "if-part" of the "if-then" logical statement. We saw that, when looking from the "then" to the "if," there are an infinity of possible "if's" that could explain the particular effect we observed.
But now, if we look at the statement the other way, it is generally the case that the "if" statement need not logically imply ONLY the contents of the "then" statement. That is, the statement "If it is raining out, then the cat is in the house" is such that (supposing it to be true) the fact of its raining out means that it also must be true that the cat is inside. This is what is meant by "implies." But it doesn't mean that this is the only implication of the fact that it's raining out. The fact that it's raining out also implies that the ground is getting wet, that there are clouds overhead, that people are putting up umbrellas, etc., etc.
That is, there are an infinity of possible IMPLICATIONS for a given "if" in an "if-then" statement, of which the "then" named is only one--just as there are an infinity of possible "if's" for a given "then."
Thus, any scientific theory, which is of the form "if (cause), then (already observed effect)" will have OTHER implications beyond the already observed data.
DEFINITION: A PREDICTION from a scientific theory is an AS YET UNOBSERVED IMPLICATION of what the theory asserts as the "cause" of the original effect.
These predictions may be of two types: The theory may predict events or "facts" not yet observed at all. Thus, Newton's theory of gravitation predicted that the rate of fall of bodies on other planets would be different from that of the earth--at a time when, obviously, no one had ever observed any other rate of fall (and it was believed no one would ever be in a position to observe one).
But secondly, the theory may predict facts that are already known to be facts, but were not known to have any connection with the cause alleged in the theory. Thus, Newton's theory of gravitation predicted the elliptical orbits of the planets. These orbits had been pretty accurately known ever since the time of Johannes Kepler (Galileo is even said to have laughed at him for thinking that orbits would be anything but circular); but no one had ever thought to connect the ellipticality of orbits with the tendency of bodies to fall down when dropped. Thus, the Keplerian orbits of the planets were a prediction from Newton's theory, which was one of the things that made scientists accept it as almost certainly giving the cause of falling bodies. (It didn't, of course, as we saw.)
The point of the prediction is that IF the theory states the true cause, ALL the predictions of both types MUST ACTUALLY BE FACTS.
We saw, remember, that the logic of "if-then" is that, given the truth of the "if," the "then" must be true, or the "if-then" connection itself is false.
Hence, if ANY ONE of the predictions from a theory turns out NOT to be a fact, the theory is falsified; it cannot be stating what really is the cause.
And since there are an infinity of possible predictions from a theory, this offers a fertile field for investigation. Certainly some of these predictions must be observable; and if they are, you can go looking to see if they actually occur. If they don't, you can throw out the theory.
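The logic here can be checked mechanically. The truth-table sketch below verifies that a failed prediction refutes the theory (modus tollens is a valid form), while a successful prediction does not prove it (affirming the consequent is invalid):

```python
from itertools import product

def implies(p, q):
    """Material implication: "if p then q" is false only when p is
    true and q is false."""
    return (not p) or q

# Modus tollens: from (C implies E) and not-E, conclude not-C.
# Valid: it holds under every truth assignment.
modus_tollens_valid = all(
    (not c) if (implies(c, e) and not e) else True
    for c, e in product([False, True], repeat=2)
)

# Affirming the consequent: from (C implies E) and E, conclude C.
# Invalid: a verified prediction does not prove the theory.
affirming_consequent_valid = all(
    c if (implies(c, e) and e) else True
    for c, e in product([False, True], repeat=2)
)

print(modus_tollens_valid)         # True
print(affirming_consequent_valid)  # False
```

This is the asymmetry the whole verification stage rests on: one false prediction kills a theory, but no number of true ones demonstrates it.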
It is also the case that the LESS LIKELY some fact predicted by a theory is to be a fact on any other assumption, the more likely it is that, if this fact occurs, the theory is expressing the real cause.
DEFINITION: VERIFICATION is the process of observing to see whether predictions from the theory are actually facts or not. It is a kind of experiment performed on the predictions from the theory.
Let me illustrate these last few statements by Newton's gravitation theory and Einstein's relativity theory.
First, we note that Newton's gravitation theory predicted the orbits of the planets as elliptical. It also predicted that these orbits would behave in special ways (would "precess," to be technical) because of the gravitational attraction of the other planets as well as that of the sun. This also was observed. The theory also predicted how much this precession would be (though the mathematics of figuring it out, given all the planets, was formidable).
Here is where, as I mentioned, Newton's theory came a cropper. His prediction of Mercury's orbit was off by an infinitesimal amount; but the fact was that it predicted that Mercury would have to be in a certain place at a certain time, and it wasn't there. Clearly, the facts aren't at fault here, and so the theory had to be wrong.
Einstein then developed the theory that falling isn't due to a force; rather, accelerated motion (i.e. constant increase in speed) is the natural way bodies move. But they move along paths in space-time; and in the presence of massive objects, these paths are curved, so that the "natural fall" of something like Mercury is along a space-time path that (he predicted) would have a certain shape. This shape was the orbit that was actually observed--the one Newton's theory of a force missed.
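Einstein's formula for the extra ("anomalous") advance of a planet's perihelion per orbit is 6(pi)GM/(a(1 - e^2)c^2). Evaluated with standard values for the constants and for Mercury's orbit, it comes out at the observed forty-odd seconds of arc per century:

```python
import math

# Standard values (SI units); Mercury's orbital elements are the
# usual textbook figures.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # mass of the sun, kg
C = 2.998e8            # speed of light, m/s
A = 5.791e10           # Mercury's semi-major axis, m
ECC = 0.2056           # Mercury's orbital eccentricity
PERIOD_DAYS = 87.969   # Mercury's orbital period, days

# Extra perihelion advance per orbit, in radians.
per_orbit = 6 * math.pi * G * M_SUN / (A * (1 - ECC**2) * C**2)

orbits_per_century = 100 * 365.25 / PERIOD_DAYS
arcsec_per_century = per_orbit * orbits_per_century * (180 * 3600) / math.pi

print(arcsec_per_century)  # about 43 seconds of arc per century
```

That this tiny, exact figure drops out of the theory, rather than being put in by hand, is part of what made the prediction so persuasive.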
But if space-time itself is curved, then light (which travels, of course, through space) would also have to follow the curve of space-time, and so would not travel in what we normally think of as a straight line. So this theory predicts "curved" trajectories for the light from a star, say, as it passes close by the (massive) sun on its way from the star to us. But our eyes and seeing apparatus (like telescopes) are so constructed that we see things as if the light had traveled in a straight line, just as the bent light from the oar dipped in water makes us see the oar as bent at the surface. Hence this "traveling in a curved path by the sun" would show up as the star's appearing shifted from its normal position, like the tip of the oar after you dip it in water.
But how to observe this? The sun is so bright that you can't see stars whose light passes near it. But during an eclipse, the sun is darkened enough so that telescopes can see stars which are very close to the edge of the sun: close enough so that the predicted shifting of apparent position would be observable.
And the stars did appear shifted out of the positions we knew they would appear to be in if the sun weren't there. Now this jumping around of the apparent positions of the stars is a fantastic event in itself; an effect that definitely needs an explanation. And light has no mass (no "rest mass," as they say nowadays), and so couldn't be attracted by any force. And the shifting was exactly as much as Einstein's theory predicted.
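How much was "exactly as much"? Einstein's deflection for a ray of light grazing the sun's edge is 4GM/(c^2 R) (twice what a naive Newtonian treatment of light would give). Computed with standard values for the sun, it comes to about 1.75 seconds of arc:

```python
import math

# Standard values (SI units) for a light ray grazing the sun's edge.
G = 6.674e-11      # gravitational constant
M_SUN = 1.989e30   # mass of the sun, kg
R_SUN = 6.963e8    # radius of the sun, m
C = 2.998e8        # speed of light, m/s

# Einstein's general-relativistic deflection, in radians.
deflection_rad = 4 * G * M_SUN / (C**2 * R_SUN)
deflection_arcsec = deflection_rad * (180 * 3600) / math.pi

print(deflection_arcsec)  # about 1.75 seconds of arc
```

The eclipse observations matched this figure, not half of it; the exactness of the agreement is what carries the argument that follows.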
Put all this together, and it is very unlikely that Einstein could have come up with an explanation that (a) was intended to explain the facts in question, (b) predicted an event not yet observed, which was extremely unlikely in itself, (c) predicted it exactly, and (d) had the event then observed to be exactly as predicted--and yet (e) was not the true explanation, the prediction and the event's agreement being sheer coincidence.
You can see that (e) is possible, but, especially since the event is really very unlikely in itself, it is fantastic to assume that Einstein just happened by chance to predict it--especially when his theory explains all of the other data dealing with the heavenly motions, including those that Newton's couldn't.
And that is why predictions are so useful in science. If the theory predicts a "fact" that turns out not to be a fact, we can throw out the theory as maybe good speculation, but not really stating the cause.
But if it predicts an event which is unlikely on any other supposition, then it becomes very likely that the theory does state what really is the cause of the effect.
Now of course, Newton's gravitation theory should be a caution here. This prediction of what does occur does not PROVE that the theory IS true, still less that it MUST be. It simply makes it very likely that it is true. It still could be an explanation that is very close to the truth, but not really the truth.
NOTE that the verification process NEVER PROVES a theory to be TRUE. It can prove a theory false, but no theory is ever really totally verified.
It is always possible, then, for a scientific theory to be overthrown; though it may be extremely unlikely, depending on how many otherwise improbable events have been verified.
3.6. A prediction from this theory
So our theory, based on the simple assumption that scientists, confronted with apparently self-contradictory sets of facts, try to find the fact that makes sense out of this effect, has explained all the steps of the scientific method with their observed details, and while it was at it, made sense out of probability and induction. It sounds like a good theory. As far as I know, there is nothing that science does as such that is not predictable from this theory; if anyone finds anything, I would appreciate knowing it, so that the theory could be altered or scrapped in favor of something that fits all of the facts.
But there is a prediction that I would like to make from this theory. If science is based on apparently contradictory sets of facts, it would follow that there might be other contradictions appearing than are able to be handled by any of the sciences we know of.
For instance, there are problems connected with the mere fact that things change, irrespective of any specific way they change: How can something "turn into" something else, so that the "something else" is what used to be what it now isn't? Now, granting that this isn't just playing with words and is a real effect, it can't be handled by (a) physics, because physics doesn't deal with changes of one kind of thing into another kind of thing, but just changes of state; (b) chemistry, because this deals only with chemical changes, not physical or biological ones; (c) biology, because its changes are different from those of physics and chemistry--and so on. There is no science that deals with change as such.
Again, every science assumes that there is a world "out there" which we can observe, and say things about as it actually is. But if our perceptions are affected, not only by the world "out there" but by the conditions under which we perceive it, how can we say things about the world as it is in itself? But no science can handle this, because every science starts from certain observations of what is "out there." Even the psychological science of perception starts, not from the perception itself, but from observations of stimuli and reports of perceptions by subjects.
So there are important problems not handled--or even handleable--by any of what are called the "sciences." And some of these effects are vital to our lives. Is there, really, something that makes it make sense for a person to act honestly when it is greatly to his advantage to act dishonestly? Are we really free and in basic control of our lives, or is this inescapable idea that we are free an illusion, and we are the puppets of our environment and heredity? And so on.
These are general questions, but important ones. There ought to be a scientific way to handle them, so that we can come up with verifiable theories and falsify ones that don't work.
And so this theory predicts a scientific approach to philosophy. It should be possible to take these philosophical issues, state them as effects, develop hypotheses about what the cause is, test to see if these hypotheses fit the facts observed, and then predict other supposed "facts" from the theory reached, and look to see if these predictions are verified.
This would make philosophy--which has been hitherto regarded as pure speculation--into something scientific, where we could at least reject philosophical theories that predicted "facts" that simply don't occur.
And I have tried this method, and I think it works. (In fact, this theory of science is itself a good example of a verifiable philosophical theory, as we saw: it explains all that the other theories could explain and much that they couldn't, and it predicts philosophical method, which, I think you can see, is verified.) The rest of the book will present some of the results of applying this method to general issues connected with bodies and how they change. Of course, only time will tell whether there are other predictions of this theory of science which cannot be verified.